---
license: gpl
pipeline_tag: object-detection
tags:
- ultralytics
- yolo
- yolov10
- object-detection
---
# Rock Paper Scissors Object Detection Model
**Created by FRC Team 578**
## Description
This YOLOv10-small model was trained for educational purposes only, to illustrate to students how an object detection model works. It was trained for 10 epochs.
## Training Data
The model was trained on 100 images found online. No augmentation of the images was performed.
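For reference, a run like the one described above can be sketched in a few lines with the Ultralytics API. This is only an illustrative sketch: the dataset config `data.yaml` and the `yolov10s.pt` starting weights are assumptions, not the team's actual training files.

```python
from ultralytics import YOLO

# Start from pretrained YOLOv10-small weights (assumed checkpoint name)
model = YOLO("yolov10s.pt")

# Train for 10 epochs on the 100-image dataset; "data.yaml" is a
# hypothetical dataset config listing the rock/paper/scissors classes
model.train(data="data.yaml", epochs=10, imgsz=640)
```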
## Metrics
| Class | Images | Instances | P | R | mAP50 | mAP50-95 |
| -------- | ------- | --------- | ------ | ------| ----- | -------- |
| all | 100 | 260 | 0.917 | 0.795 | 0.925 | 0.735 |
| rock | 69 | 84 | 0.875 | 0.835 | 0.924 | 0.728 |
| paper | 56 | 65 | 0.899 | 0.815 | 0.909 | 0.721 |
| scissors | 88 | 111 | 0.976 | 0.736 | 0.943 | 0.755 |
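Since this model is a teaching aid, it may help students to see the overlap measure behind these numbers: mAP is built on intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal pure-Python sketch (the example boxes are made up):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143, partial overlap
```

A detection counts as correct at mAP50 when its IoU with a ground-truth box is at least 0.5; mAP50-95 averages over IoU thresholds from 0.5 to 0.95.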
## How to Use
```bash
pip install ultralytics
pip install huggingface_hub
```
```python
from ultralytics import YOLO
from huggingface_hub import hf_hub_download
from matplotlib import pyplot as plt

# Load the weights from our repository
model_path = hf_hub_download(
    local_dir=".",
    repo_id="fairportrobotics/rock-paper-scissors",
    filename="model.pt"
)
model = YOLO(model_path)

# Load a test image
sample_path = hf_hub_download(
    local_dir=".",
    repo_id="fairportrobotics/rock-paper-scissors",
    filename="sample.jpg"
)

# Do the predictions and save the annotated image to ./detected
res = model.predict(
    source=sample_path,
    project='.',
    name='detected',
    exist_ok=True,
    save=True,
    show=False,
    show_labels=True,
    show_conf=True,
    conf=0.5
)

# Display the annotated image
plt.figure(figsize=(15, 10))
plt.imshow(plt.imread('detected/sample.jpg'))
plt.show()
```
As you can see, the model isn't perfect ;)
### Use the model with your webcam
```python
from ultralytics import YOLO
import cv2
import math
from huggingface_hub import hf_hub_download

# Start the webcam
cap = cv2.VideoCapture(0)
cap.set(3, 640)  # frame width
cap.set(4, 480)  # frame height

# Load the weights from our repository
model_path = hf_hub_download(
    local_dir=".",
    repo_id="fairportrobotics/rock-paper-scissors",
    filename="model.pt"
)
model = YOLO(model_path)

# Object classes
classNames = ["rock", "paper", "scissors"]

while True:
    success, img = cap.read()
    results = model(img, stream=True)

    for r in results:
        for box in r.boxes:
            # Bounding box coordinates, converted to int values
            x1, y1, x2, y2 = box.xyxy[0]
            x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)

            # Draw the box on the frame
            cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 255), 3)

            # Confidence, rounded up to two decimals
            confidence = math.ceil(box.conf[0] * 100) / 100

            # Class name
            cls = int(box.cls[0])

            # Label the detection
            org = (x1, y1)
            font = cv2.FONT_HERSHEY_SIMPLEX
            fontScale = 1
            color = (255, 0, 0)
            thickness = 2
            cv2.putText(img, classNames[cls] + " " + str(confidence),
                        org, font, fontScale, color, thickness)

    cv2.imshow('Webcam', img)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```