XingjianL commited on
Commit
ab3c3a1
1 Parent(s): 058f64f

update readme

Files changed (2)
  1. README.md +117 -1
  2. example_load.py +88 -0
README.md CHANGED
@@ -10,4 +10,120 @@ tags:
- plant-disease
size_categories:
- 10K<n<100K
---

# Dataset Summary
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

### Dataset Summary

This dataset consists of 21,384 pairs of 2448x2048 synthetic images of tomato plants. Each pair consists of left/right RGB-D images, plus panoptic segmentation labels for the left image.

### Supported Tasks and Leaderboards

- `image-segmentation`: panoptic and semantic labels for separating tomato plants and identifying plant features and disease types.
- `depth-estimation`: ground-truth depth values for stereo and monocular applications.

### Languages
English

## Dataset Structure

### Data Instances
Each datapoint consists of 6 images:
```
{
 'left_rgb': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2448x2048 at 0x7F63FB5F4350>,
 'right_rgb': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2448x2048 at 0x7F63FF2B3950>,
 'left_semantic': <PIL.PngImagePlugin.PngImageFile image mode=L size=2448x2048 at 0x7F63FC4488C0>,
 'left_instance': <PIL.TiffImagePlugin.TiffImageFile image mode=I;16 size=2448x2048 at 0x7F63FC497EF0>,
 'left_depth': <PIL.TiffImagePlugin.TiffImageFile image mode=F size=2448x2048 at 0x7F63FACF6E70>,
 'right_depth': <PIL.TiffImagePlugin.TiffImageFile image mode=F size=2448x2048 at 0x7F63FACF7560>
}
```
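As a quick sanity check of the structure above, the `I;16` instance labels and `F` depth maps convert directly to numpy arrays. A minimal sketch, using tiny in-memory stand-ins (the real images are 2448x2048):

```python
import numpy as np
from PIL import Image

# Tiny stand-ins for one datapoint's label images (the real files are 2448x2048).
left_instance = Image.fromarray(np.arange(12, dtype=np.uint16).reshape(3, 4))  # mode "I;16"
left_depth = Image.fromarray(np.full((3, 4), 123.5, dtype=np.float32))         # mode "F"

# PIL's "I;16" and "F" images convert directly to numpy arrays.
instance_ids = np.asarray(left_instance)  # uint16, one instance ID per pixel
depth_cm = np.asarray(left_depth)         # float32, depth in centimeters
```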

### Data Fields
- `left_rgb`: Left RGB image, JPEG-compressed at 95% quality.
- `right_rgb`: Right RGB image, JPEG-compressed at 95% quality. Note the stereo baseline is 3.88112 cm and the HFOV is 95.452621 degrees.
- `left_semantic`: Rendered label image encoding the semantic class of each pixel. See `example_load.py` for the class list and a sample conversion script.
- `left_instance`: Rendered label image encoding the tomato plant instance of each pixel.
- `left_depth`: Rendered left depth map, compressed to 16-bit floats (in centimeters).
- `right_depth`: Rendered right depth map, compressed to 16-bit floats (in centimeters).
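Because the baseline and HFOV are given, the disparity expected for a given depth can be sketched with a standard pinhole model. This assumes no lens distortion, and `depth_to_disparity` is an illustrative helper, not part of the dataset loader:

```python
import math

WIDTH_PX = 2448        # image width from the card
BASELINE_CM = 3.88112  # stereo baseline from the card
HFOV_DEG = 95.452621   # horizontal field of view from the card

# Pinhole model: focal length in pixels from the horizontal field of view.
focal_px = (WIDTH_PX / 2) / math.tan(math.radians(HFOV_DEG) / 2)

def depth_to_disparity(depth_cm):
    """Horizontal disparity (in pixels) expected for a point at depth_cm."""
    return focal_px * BASELINE_CM / depth_cm
```

For example, a point 1 m away should shift by roughly `depth_to_disparity(100.0)` ≈ 43 pixels between the left and right views.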

### Data Splits
80/20, as listed in `train.txt` and `val.txt`.
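Reading the split files takes only a few lines. The one-ID-per-line layout and the `scene_*` names below are assumptions for illustration, not real dataset identifiers:

```python
from pathlib import Path

# Hypothetical split files standing in for the dataset's train.txt / val.txt;
# the one-ID-per-line format is assumed for illustration.
Path("train.txt").write_text("scene_0001\nscene_0002\nscene_0003\n")
Path("val.txt").write_text("scene_0004\n")

train_ids = Path("train.txt").read_text().split()
val_ids = Path("val.txt").read_text().split()

# A sanity check worth running on any split: no sample appears in both sets.
assert not set(train_ids) & set(val_ids)
```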

## Dataset Creation

### Curation Rationale
Created to provide a dataset for dense plant-disease detection on a robotics platform, matching its camera sensors and strobe lighting.

### Source Data

#### Initial Data Collection and Normalization
We used the PlantVillage dataset, with further processing to align the healthy leaf colors with the purchased assets. We collected 750 GB of original data, then compressed the depth images from 32-bit to 16-bit floats and the RGB images to 95% JPEG quality, reducing the total to roughly 160 GB.
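The depth compression step described above can be sketched as follows, using synthetic stand-in values. Since float16 keeps roughly three significant digits, the relative rounding error for depths in this range stays below 2**-11 (about 0.05%):

```python
import numpy as np

# Stand-in depth values (centimeters); the real maps are 2448x2048 float32.
rng = np.random.default_rng(0)
depth32 = rng.uniform(10.0, 500.0, size=(4, 4)).astype(np.float32)

# The compression step: store 32-bit depth as 16-bit floats, halving storage.
depth16 = depth32.astype(np.float16)

# Relative error introduced by rounding to half precision.
rel_err = np.abs(depth16.astype(np.float32) - depth32) / depth32
```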

#### Who are the source language producers?
See the PlantVillage dataset for the tomato diseases. The tomato plant models were purchased on Sketchfab and modified to add extra green tomatoes and denser leaves.

### Annotations
#### Annotation process
Annotations are generated automatically by the simulation.

#### Who are the annotators?
The same as the dataset creators; the tomato leaf disease annotations follow the PlantVillage creators.

### Personal and Sensitive Information
[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators
[More Information Needed]

### Licensing Information
CC BY-NC-SA-4.0

### Citation Information
[More Information Needed]

### Contributions
[More Information Needed]
example_load.py ADDED
@@ -0,0 +1,88 @@
from datasets import load_dataset
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats

# similar to cityscapes for mmsegmentation
# class name: (new_id, img_id)
semantic_map = {
    "bacterial_spot": (0, 5),
    "early_blight": (1, 10),
    "late_blight": (2, 20),
    "leaf_mold": (3, 25),
    "septoria_leaf_spot": (4, 30),
    "spider_mites": (5, 35),
    "target_spot": (6, 40),
    "mosaic_virus": (7, 45),
    "yellow_leaf_curl_virus": (8, 50),
    "healthy_leaf_pv": (9, 15),   # plant village healthy leaf
    "healthy_leaf_t": (9, 255),   # texture leaf (healthy)
    "background": (10, 0),
    "tomato": (11, 121),
    "stem": (12, 111),
    "wood_rod": (13, 101),
    "red_band": (14, 140),
    "yellow_flower": (15, 131),
}

def maj_vote(img, x, y, n=3):
    # Majority vote over an n x n window centered on (x, y), ignoring unlabeled (255) pixels.
    half = n // 2
    x_min, x_max = max(0, x - half), min(img.shape[1], x + half + 1)
    y_min, y_max = max(0, y - half), min(img.shape[0], y + half + 1)

    window = img[y_min:y_max, x_min:x_max].flatten()
    window = window[window != 255]

    if len(window) > 0:
        return stats.mode(window, keepdims=True)[0][0]
    return semantic_map["background"][0]

def color_to_id(img_semantic, top_k_disease=10, semantic_map=semantic_map):
    semantic_id_img = np.ones(img_semantic.shape) * 255
    disease_counts = []
    # remap rendered color to semantic id
    for _, id_value_map in semantic_map.items():
        # track disease pixel counts for top_k_disease filtering
        if 1 < id_value_map[1] < 60:
            disease_counts.append(np.sum(np.where(img_semantic == id_value_map[1], 1, 0)))
        semantic_id_img[img_semantic == id_value_map[1]] = id_value_map[0]
    # filter for most common disease labels
    for i, item_i in enumerate(np.argsort(disease_counts)[::-1]):
        if i >= top_k_disease:
            id_value_map = list(semantic_map.items())[item_i][1]
            semantic_id_img[img_semantic == id_value_map[1]] = 255

    # Apply majority voting for unlabeled pixels (needed as the rendering process can blend pixels)
    unknown_mask = (semantic_id_img == 255)
    for y, x in np.argwhere(unknown_mask):
        semantic_id_img[y, x] = maj_vote(semantic_id_img, x, y, 3)
    return semantic_id_img

dataset = load_dataset("xingjianli/tomatotest", 'sample', trust_remote_code=True, num_proc=4)
print(dataset["train"][0])

left_rgb_img = dataset["train"][0]['left_rgb']
right_rgb_img = dataset["train"][0]['right_rgb']
left_semantic_img = np.asarray(dataset["train"][0]['left_semantic'])
left_instance_img = np.asarray(dataset["train"][0]['left_instance'])
left_depth_img = np.asarray(dataset["train"][0]['left_depth'])
right_depth_img = np.asarray(dataset["train"][0]['right_depth'])

plt.subplot(231)
plt.imshow(left_rgb_img)
plt.subplot(232)
plt.imshow(right_rgb_img)
plt.subplot(233)
plt.imshow(color_to_id(left_semantic_img))
plt.subplot(234)
plt.imshow(np.where(left_depth_img > 500, 0, left_depth_img))
plt.subplot(235)
plt.imshow(np.where(right_depth_img > 500, 0, right_depth_img))
plt.subplot(236)
plt.imshow(left_instance_img)
plt.show()