Update README.md

---
license: agpl-3.0
---

# YOLOv8 models for deadwood segmentation from RGB UAV imagery

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Model type:** Instance segmentation
- **License:** AGPL-3.0
- **Finetuned from model:** Ultralytics pretrained `yolov8-seg` models

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [https://github.com/mayrajeo/yolov8-deadwood](https://github.com/mayrajeo/yolov8-deadwood)
- **Paper:** Added after submission
- **Demo:** [https://huggingface.co/spaces/mayrajeo/yolov8-deadwood](https://huggingface.co/spaces/mayrajeo/yolov8-deadwood)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The models are meant for detecting and segmenting fallen and standing deadwood from RGB UAV images. As they were trained on 640x640 pixel orthoimage chips with around 5 cm spatial resolution, they most likely work best on similar imagery.

The models can be used directly with the `ultralytics` library, for instance `model = YOLO(<model_weights.pt>)`, and [https://github.com/mayrajeo/yolov8-deadwood](https://github.com/mayrajeo/yolov8-deadwood) contains example scripts for running the models on larger orthomosaics.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

There are some things to keep in mind when using these models:

* The models are trained on imagery from two geographically distinct locations, but both study sites consist of dense boreal forests in Finland.
* The imagery was collected during the leaf-on season, so the models will not produce optimal results for imagery from other seasons.

## How to Get Started with the Model

Single 640x640 pixel image chips can be processed with

```python
from ultralytics import YOLO

# Load one of the fine-tuned checkpoints, e.g. yolov8m_both.pt
model = YOLO("<path_to_model>")

# Run inference on a single 640x640 pixel image chip
res = model("<path_to_image>")
```
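
The returned `res` is a list of `ultralytics` `Results` objects, one per input image. As a short sketch (attribute names follow the `ultralytics` results API; the class ordering should be checked from `model.names`), the predicted classes, scores and mask outlines can be read out like this:

```python
r = res[0]                       # results for the single input chip

classes = r.boxes.cls.tolist()   # class index per instance, e.g. fallen vs. standing deadwood
scores = r.boxes.conf.tolist()   # confidence score per instance
polygons = r.masks.xy if r.masks is not None else []  # mask outlines as (N, 2) coordinate arrays

print(r.names, classes, scores)
```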

Larger orthomosaics should be processed with the `sahi` library, or with the `predict_image.py` script from the related GitHub repository.
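
As a rough illustration, sliced prediction over a larger orthomosaic with `sahi` could look like the sketch below. The checkpoint name, input file, confidence threshold and overlap ratios are placeholders rather than values from this card, and the input is assumed to be an RGB image that `sahi` can read; the `predict_image.py` script in the GitHub repository is the supported route for georeferenced orthomosaics.

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",           # sahi wrapper for Ultralytics YOLOv8 weights
    model_path="yolov8m_both.pt",  # placeholder: one of the released checkpoints
    confidence_threshold=0.25,     # assumption, tune for your data
    device="cuda:0",               # or "cpu"
)

# Slice the mosaic into 640x640 windows to match the training chip size
result = get_sliced_prediction(
    "orthomosaic.png",             # placeholder input image
    detection_model,
    slice_height=640,
    slice_width=640,
    overlap_height_ratio=0.2,      # assumption: small overlap reduces edge artifacts
    overlap_width_ratio=0.2,
)
result.export_visuals(export_dir="predictions/")
```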

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The models were trained on manually annotated deadwood polygon data. From the Hiidenportti study area, 33 rectangular scenes were extracted and all visible deadwood was annotated in them. The same process was done for Sudenpesänkangas, where 71 scenes of 100x100 meters were extracted.

In total, the dataset contained 13,813 deadwood instances, of which 2,502 were standing deadwood canopies and 11,311 were fallen deadwood trunks. The Hiidenportti dataset contained 1,083 standing and 7,396 fallen annotations, whereas Sudenpesänkangas contained 1,419 standing and 3,915 fallen annotations.

As using the full-sized scenes for training would be infeasible due to their size, the images were split into 640x640 pixel image chips without overlap, and the polygon annotations were converted to YOLO annotation format. After this process, the Hiidenportti (HP) dataset contained 632 image chips for training, 142 for validation and 211 for testing, and the Sudenpesänkangas (SPK) dataset contained 688, 224 and 224 chips for training, validation and testing, respectively.
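
For illustration only, the chipping step could be sketched roughly as below; the tiling logic and file naming are assumptions rather than the exact preprocessing used for this dataset, and the conversion of the polygon annotations to YOLO format is not shown.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def chip_scene(scene_path: str, out_dir: str, chip_size: int = 640) -> None:
    """Split one RGB scene into non-overlapping chip_size x chip_size chips."""
    scene = np.array(Image.open(scene_path))  # (H, W, 3) array
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    h, w = scene.shape[:2]
    # Incomplete border chips are simply dropped in this sketch
    for row in range(0, h - chip_size + 1, chip_size):
        for col in range(0, w - chip_size + 1, chip_size):
            chip = scene[row:row + chip_size, col:col + chip_size]
            Image.fromarray(chip).save(out / f"{Path(scene_path).stem}_{row}_{col}.png")
```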

There are three types of models: models with the `_hp` suffix are trained only on Hiidenportti data, models with the `_spk` suffix only on Sudenpesänkangas data, and models with the `_both` suffix on data from both sites.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

All models were trained on a single V100 GPU with 32 GB of memory on the Puhti supercomputer hosted by CSC - IT Center for Science, Finland.

Each model was trained for a maximum of 30 epochs with an early stopping patience of 50 epochs, using the Adam optimizer with an initial learning rate of 0.001. Batch sizes were chosen to be as large as possible while consuming at most 60% of the available GPU memory. Automatic mixed precision was used during training.
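
A minimal sketch of a comparable training run with the `ultralytics` API is shown below; the dataset YAML, image size and batch size are assumptions, while the epochs, patience, optimizer and learning rate follow the description above.

```python
from ultralytics import YOLO

# Start from an Ultralytics pretrained segmentation checkpoint, e.g. the medium variant
model = YOLO("yolov8m-seg.pt")

model.train(
    data="deadwood.yaml",  # assumed dataset YAML with the chip splits and two classes
    epochs=30,
    patience=50,
    optimizer="Adam",
    lr0=0.001,
    imgsz=640,
    batch=16,              # assumption; in practice chosen per model to fit GPU memory
    amp=True,              # automatic mixed precision
)
```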

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

The models were evaluated on the test splits of both study sites.

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

We used standard instance segmentation metrics (mask precision, recall, mAP50 and mAP50-95), with the implementations from the `ultralytics` library. In the tables below, (M) denotes mask-level metrics.
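
For reference, these metrics can be reproduced with the `ultralytics` validation API along the lines of the sketch below; the checkpoint name, dataset YAML and split name are assumptions.

```python
from ultralytics import YOLO

model = YOLO("yolov8m_both.pt")  # placeholder: one of the released checkpoints

# Assumed dataset YAML defining a test split with the fallen and standing classes
metrics = model.val(data="deadwood.yaml", split="test")

# Mask-level metrics, as reported in the tables below
print(metrics.seg.mp, metrics.seg.mr)      # mean precision and recall
print(metrics.seg.map50, metrics.seg.map)  # mAP50 and mAP50-95
```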

### Results

Results for the Hiidenportti test data:

| Model | precision(M) Total | precision(M) Fallen | precision(M) Standing | recall(M) Total | recall(M) Fallen | recall(M) Standing | mAP50(M) Total | mAP50(M) Fallen | mAP50(M) Standing | mAP50-95(M) Total | mAP50-95(M) Fallen | mAP50-95(M) Standing |
|:-------------|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|
| yolov8n_hp   | 0.591 | 0.624 | 0.557 | 0.575 | 0.571 | 0.579 | 0.600 | 0.602 | 0.598 | 0.294 | 0.273 | 0.315 |
| yolov8n_spk  | 0.512 | 0.560 | 0.463 | 0.469 | 0.485 | 0.454 | 0.464 | 0.495 | 0.433 | 0.198 | 0.194 | 0.202 |
| yolov8n_both | 0.720 | 0.741 | 0.699 | 0.571 | 0.534 | 0.607 | 0.647 | 0.612 | 0.683 | 0.317 | 0.263 | 0.371 |
| yolov8s_hp   | 0.688 | 0.679 | 0.697 | 0.581 | 0.563 | 0.599 | 0.643 | 0.613 | 0.672 | 0.325 | 0.280 | 0.370 |
| yolov8s_spk  | 0.548 | 0.669 | 0.428 | 0.478 | 0.463 | 0.492 | 0.484 | 0.528 | 0.439 | 0.212 | 0.213 | 0.211 |
| yolov8s_both | 0.650 | 0.623 | 0.678 | 0.614 | 0.644 | 0.584 | 0.656 | 0.638 | 0.675 | 0.324 | 0.284 | 0.364 |
| yolov8m_hp   | 0.683 | 0.678 | 0.688 | 0.572 | 0.570 | 0.574 | 0.638 | 0.607 | 0.669 | 0.306 | 0.256 | 0.356 |
| yolov8m_spk  | 0.609 | 0.702 | 0.516 | 0.563 | 0.539 | 0.587 | 0.551 | 0.591 | 0.512 | 0.256 | 0.254 | 0.258 |
| yolov8m_both | 0.676 | 0.643 | 0.710 | 0.619 | 0.637 | 0.602 | 0.671 | 0.638 | 0.703 | 0.338 | 0.286 | 0.390 |
| yolov8l_hp   | 0.673 | 0.642 | 0.704 | 0.572 | 0.611 | 0.533 | 0.624 | 0.599 | 0.648 | 0.302 | 0.256 | 0.348 |
| yolov8l_spk  | 0.609 | 0.700 | 0.518 | 0.530 | 0.524 | 0.536 | 0.544 | 0.585 | 0.504 | 0.254 | 0.254 | 0.253 |
| yolov8l_both | 0.701 | 0.658 | 0.744 | 0.622 | 0.627 | 0.616 | 0.676 | 0.648 | 0.705 | 0.339 | 0.291 | 0.386 |
| yolov8x_hp   | 0.656 | 0.607 | 0.705 | 0.600 | 0.614 | 0.587 | 0.635 | 0.630 | 0.640 | 0.317 | 0.285 | 0.350 |
| yolov8x_spk  | 0.550 | 0.706 | 0.395 | 0.493 | 0.460 | 0.526 | 0.469 | 0.548 | 0.390 | 0.211 | 0.234 | 0.188 |
| yolov8x_both | 0.709 | 0.684 | 0.734 | 0.620 | 0.603 | 0.638 | 0.682 | 0.654 | 0.709 | 0.353 | 0.306 | 0.400 |

Results for the Sudenpesänkangas test data:

| Model | precision(M) Total | precision(M) Fallen | precision(M) Standing | recall(M) Total | recall(M) Fallen | recall(M) Standing | mAP50(M) Total | mAP50(M) Fallen | mAP50(M) Standing | mAP50-95(M) Total | mAP50-95(M) Fallen | mAP50-95(M) Standing |
|:-------------|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|
| yolov8n_hp   | 0.683 | 0.492 | 0.873 | 0.233 | 0.249 | 0.218 | 0.308 | 0.288 | 0.329 | 0.138 | 0.106 | 0.170 |
| yolov8n_spk  | 0.721 | 0.615 | 0.826 | 0.519 | 0.491 | 0.547 | 0.591 | 0.508 | 0.673 | 0.292 | 0.197 | 0.388 |
| yolov8n_both | 0.730 | 0.682 | 0.778 | 0.527 | 0.444 | 0.611 | 0.604 | 0.504 | 0.705 | 0.305 | 0.198 | 0.413 |
| yolov8s_hp   | 0.586 | 0.446 | 0.726 | 0.342 | 0.347 | 0.336 | 0.414 | 0.331 | 0.497 | 0.187 | 0.121 | 0.253 |
| yolov8s_spk  | 0.670 | 0.634 | 0.706 | 0.609 | 0.517 | 0.702 | 0.638 | 0.537 | 0.739 | 0.310 | 0.206 | 0.413 |
| yolov8s_both | 0.672 | 0.617 | 0.727 | 0.577 | 0.508 | 0.646 | 0.617 | 0.526 | 0.709 | 0.309 | 0.209 | 0.410 |
| yolov8m_hp   | 0.613 | 0.440 | 0.786 | 0.339 | 0.330 | 0.349 | 0.407 | 0.331 | 0.482 | 0.185 | 0.122 | 0.248 |
| yolov8m_spk  | 0.720 | 0.604 | 0.835 | 0.556 | 0.529 | 0.583 | 0.635 | 0.525 | 0.744 | 0.317 | 0.215 | 0.420 |
| yolov8m_both | 0.716 | 0.639 | 0.792 | 0.581 | 0.515 | 0.647 | 0.646 | 0.535 | 0.757 | 0.336 | 0.225 | 0.447 |
| yolov8l_hp   | 0.573 | 0.414 | 0.732 | 0.340 | 0.366 | 0.313 | 0.397 | 0.328 | 0.465 | 0.162 | 0.113 | 0.212 |
| yolov8l_spk  | 0.709 | 0.641 | 0.777 | 0.584 | 0.501 | 0.667 | 0.639 | 0.530 | 0.748 | 0.332 | 0.223 | 0.442 |
| yolov8l_both | 0.750 | 0.678 | 0.822 | 0.572 | 0.520 | 0.623 | 0.656 | 0.559 | 0.753 | 0.341 | 0.240 | 0.441 |
| yolov8x_hp   | 0.675 | 0.543 | 0.807 | 0.322 | 0.362 | 0.282 | 0.421 | 0.385 | 0.457 | 0.185 | 0.141 | 0.229 |
| yolov8x_spk  | 0.680 | 0.669 | 0.691 | 0.597 | 0.483 | 0.711 | 0.624 | 0.516 | 0.731 | 0.308 | 0.202 | 0.415 |
| yolov8x_both | 0.711 | 0.663 | 0.760 | 0.611 | 0.554 | 0.667 | 0.651 | 0.556 | 0.746 | 0.333 | 0.234 | 0.432 |

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

Added after submission.

## Model Card Contact

Janne Mäyrä, `@mayrajeo` on GitHub, Hugging Face and many other services.