## Dataset
With the trained and fine-tuned model, we generated predictions for each of the images in the collection. The dataset contains an entry for each image with the following fields:
- filename, the image name (ex. 'Советский театр_1932 No. 4_16'), combining journal name, year, issue, and page
- dpul, the URL for the image's journal in Digital Princeton University Library
- journal, the journal name
- year, the year of the journal issue
- issue, the issue containing the image
- uri, the IIIF URI used to fetch the image from Princeton's IIIF server
- yolo, the raw model prediction (ex. '3 0.1655 0.501396 0.311') in YOLO's normalized xywh format (`<object-class> <x> <y> <width> <height>`). The labels are 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3
- yolo_predictions, a list with a dictionary for each of the model's predictions, with fields for:
  - label, the predicted label
  - x, the x-value of the prediction's center point
  - y, the y-value of the prediction's center point
  - w, the total width of the prediction's bounding box
  - h, the total height of the prediction's bounding box
- abbyy_text, the text extracted from the predicted document segment using ABBYY FineReader. Note that, due to cost, only about 800 images have this field
- tesseract_text, the text extracted from the predicted document segment using Tesseract
- vision_text, the text extracted from the predicted document segment using Google Vision
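A raw prediction string from the yolo field can be unpacked into the dictionary shape used by yolo_predictions. A minimal sketch, assuming a complete five-value prediction line (the helper name and the example values are ours, not part of the dataset):

```python
# Label mapping as documented above: 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3
LABELS = {0: "image", 1: "mixedtext", 2: "title", 3: "textblock"}

def parse_yolo_line(line: str) -> dict:
    """Parse '<object-class> <x> <y> <width> <height>' (normalized xywh)."""
    parts = line.split()
    cls = int(parts[0])
    x, y, w, h = (float(v) for v in parts[1:5])
    return {"label": LABELS[cls], "x": x, "y": y, "w": w, "h": h}

print(parse_yolo_line("3 0.1655 0.501396 0.311 0.125"))
# {'label': 'textblock', 'x': 0.1655, 'y': 0.501396, 'w': 0.311, 'h': 0.125}
```

An entry's yolo field may contain one such line per detected region; splitting on newlines before parsing each line would yield the full yolo_predictions list.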

'filename', 'dpul', 'journal', 'year', 'issue', 'uri', 'yolo', 'yolo_predictions', 'text', 'images_meta'
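Because x, y, w, and h are normalized to the image size, using a prediction downstream (e.g. cropping a predicted segment before OCR) requires converting it to pixel coordinates. A minimal sketch, assuming the page's pixel dimensions are known (the function name and the example dimensions are ours):

```python
def xywh_to_pixel_box(x, y, w, h, img_w, img_h):
    """Convert a normalized YOLO center/size box to pixel (left, top, right, bottom)."""
    left = (x - w / 2) * img_w
    top = (y - h / 2) * img_h
    right = (x + w / 2) * img_w
    bottom = (y + h / 2) * img_h
    return (round(left), round(top), round(right), round(bottom))

# A centered box covering 20% of the width and 10% of the height
# of a hypothetical 1000x1400 page scan:
print(xywh_to_pixel_box(0.5, 0.5, 0.2, 0.1, 1000, 1400))
# (400, 630, 600, 770)
```

The resulting tuple matches the (left, upper, right, lower) box convention used by common imaging libraries such as Pillow's `Image.crop`.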