Tasks: Text Generation
Modalities: Text
Formats: parquet
Languages: English
Size: 10M - 100M
Tags: ocr
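The card above lists parquet as the distribution format and a 10M - 100M row count. As a minimal sketch of how such a collection is typically read with the `datasets` library; the repository ID and the `text` column name below are placeholders, since the commit view does not show them:

```python
from itertools import islice

from datasets import load_dataset

# Placeholder repository ID and column name: the commit view does not show the
# actual repository, and the text column is assumed to be called "text".
ds = load_dataset("some-org/us-pd-newspapers", split="train", streaming=True)

# Stream a few records instead of downloading the full 10M - 100M row corpus.
for row in islice(ds, 3):
    print(row["text"][:200])
```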
Pclanglais committed "Update README.md" (commit 3188667, parent: 851dd5a)
README.md CHANGED
@@ -20,6 +20,8 @@ This initial aggregation was made possible thanks to the extensive open data prog
 
 The composition of the dataset adheres to the US criteria for the public domain of collective works (any publication without a copyright renewal). Under the rule of the shorter term, the dataset is also in the public domain in all countries with a Berne author-right model.
 
+The [American Stories dataset](https://huggingface.co/datasets/dell-research-harvard/AmericanStories) is a curated version of the same resource, with significant enhancements to text quality and documentation. It currently retains about 10-20% of the original material.
+
 ## Uses
 
 The primary use of the collection is large-scale cultural analytics. It has been instrumental in major digital humanities projects such as Viral Texts.
 
@@ -29,8 +31,10 @@ The collection also aims to expand the availability of open works for the traini
 The entire collection is in the public domain everywhere and has been digitized by a US federal entity.
 
 ## Future developments
-This dataset is not a one time work but will continue to evolve significantly on
+This dataset is not a one-time work but will continue to evolve significantly in several directions:
 * Correction of computer-generated errors in the text. All the texts have been transcribed automatically with Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s).
 * Enhancement of the structure/editorial presentation of the original text. Some parts of the original documents are likely unwanted for large-scale analysis or model training (headers, page counts…). Additionally, advanced document structures like tables or multi-column layouts are unlikely to be well formatted. Major enhancements could be expected from applying new SOTA layout recognition models (like COLAF) to the original PDF files.
 * Expansion of the collection to other cultural heritage holdings, especially from HathiTrust, Internet Archive and Google Books.
 
+The American Stories dataset already includes some of these features (especially better OCR and article-level segmentation) and may be a preferable solution if text quality is a concern.
+
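The first bullet under "Future developments" above concerns correcting OCR transcription errors. Purely as an illustration of what a rule-based first pass could look like (not the maintainers' actual pipeline, and with a made-up substitution table), one might write:

```python
import re

# Illustrative substitution table for common OCR confusions in historical
# newsprint; a real correction pass would be learned from aligned ground-truth
# transcriptions rather than hand-written rules.
OCR_FIXES = [
    (r"\btbe\b", "the"),   # 'h' misread as 'b'
    (r"\bwbich\b", "which"),
    (r"ſ", "s"),           # long s in older typefaces
    (r"-\n", ""),          # rejoin words hyphenated across line breaks
]

def clean_ocr(text: str) -> str:
    """Apply the rule table to a raw OCR transcription."""
    for pattern, replacement in OCR_FIXES:
        text = re.sub(pattern, replacement, text)
    return text

print(clean_ocr("tbe editor of tbe paper, wbich was founded in 1851"))
# -> the editor of the paper, which was founded in 1851
```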