---
license: agpl-3.0
datasets:
- KaraAgroAI/CADI-AI
language:
- en
metrics:
- mape
library_name: yolo
tags:
- object detection
- vision
pipeline_tag: object-detection
---


## Cashew Disease Identification with AI (CADI-AI) Model

### Model Description

Object detection model trained with [YOLOv5x](https://github.com/ultralytics/yolov5/releases), a state-of-the-art object detection architecture.  
The model was trained on the Cashew Disease Identification with AI (CADI-AI) train set (3,788 images) at a resolution of 640x640 pixels.
The CADI-AI dataset is available on the [Hugging Face Hub](https://huggingface.co/datasets/KaraAgroAI/CADI-AI).
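
To inspect the training data, the dataset repository can be fetched from the Hub, for example with `huggingface_hub`. This is a minimal sketch; the exact file layout (images plus YOLO-format labels) should be checked after download.

```python
from huggingface_hub import snapshot_download

# Download the CADI-AI dataset repository from the Hugging Face Hub.
# The internal layout (images plus YOLO-format label files) is not fixed by
# this sketch -- inspect the downloaded folder before use.
local_dir = snapshot_download(repo_id="KaraAgroAI/CADI-AI", repo_type="dataset")
print(local_dir)
```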

## Intended uses & limitations

You can use the raw model for object detection on cashew images.

### How to use

- Load model and perform prediction:

```python
import yolov5  # pip install yolov5

# load model directly from the Hugging Face Hub
model = yolov5.load('KaraAgroAI/CADI-AI')

# Images
img = ['/path/to/CADI-AI-image.jpg']  # batch of images

# set model parameters
# the Non-Maximum Suppression (NMS) confidence threshold is the minimum
# confidence score a bounding box must have in order to be kept
model.conf = 0.20  # NMS confidence threshold

# perform inference
results = model(img, size=640)

# Results
results.print()

results.xyxy[0]  # img1 predictions (tensor)
results.pandas().xyxy[0]  # img1 predictions (pandas)

# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]

# show detection bounding boxes on image
results.show()

# save results into "results/" folder
results.save(save_dir='results/')
```
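
An alternative is to load the weights through the official `ultralytics/yolov5` PyTorch Hub `custom` entry point after downloading them from the model repository. This is a sketch: the weights filename below is an assumption, so check the repository's file listing for the actual name.

```python
import torch
from huggingface_hub import hf_hub_download

# Fetch the weights file from the model repository.
# NOTE: "best.pt" is a hypothetical filename -- replace it with the weights
# file actually listed in the KaraAgroAI/CADI-AI repository.
weights_path = hf_hub_download(repo_id="KaraAgroAI/CADI-AI", filename="best.pt")

# Load the custom weights via the YOLOv5 hub entry point.
model = torch.hub.load('ultralytics/yolov5', 'custom', path=weights_path)

results = model(['/path/to/CADI-AI-image.jpg'], size=640)
results.print()
```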

- Finetune the model on your custom dataset:

```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights KaraAgroAI/CADI-AI --epochs 10
```
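
The `--data` argument points to a YOLO-style dataset config. The snippet below sketches one way to create such a file; the directory paths are placeholders, and the class order is an assumption that must match the indices used in your label files.

```python
# Write a minimal YOLO dataset config for fine-tuning (sketch).
# Paths are placeholders and the class order is an assumption -- make sure
# it matches the class indices in your YOLO-format label files.
data_yaml = """\
path: /path/to/cashew-dataset   # dataset root
train: images/train             # training images, relative to `path`
val: images/val                 # validation images, relative to `path`

nc: 3                           # number of classes
names: ['abiotic', 'disease', 'insect']
"""

with open("data.yaml", "w") as f:
    f.write(data_yaml)
```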

### Model performance 

| Class | Precision | Recall | mAP@50 | mAP@50-95 |
| --- | --- | --- | --- | --- |
| all | 0.663 | 0.632 | 0.648 | 0.291 |
| insect | 0.794 | 0.811 | 0.815 | 0.39 |
| abiotic | 0.682 | 0.514 | 0.542 | 0.237 |
| disease | 0.594 | 0.571 | 0.588 | 0.248 |

### Limitations of the Model
The model has difficulty distinguishing the disease class from the abiotic class. In a typical farm setting the two classes look very similar, and their overlapping visual characteristics make them hard to separate reliably. This is an inherent challenge of the dataset and is reflected in the lower precision and recall for these two classes.

By contrast, the model performs well on the insect class, whose distinct visual characteristics make it easier to detect and classify accurately.

### Example prediction

<div align="center">
  <img width="640" alt="KaraAgroAI/CADI-AI" src="https://huggingface.co/KaraAgroAI/CADI-AI/resolve/main/sample.jpg">
</div>