---
library_name: transformers.js
pipeline_tag: object-detection
license: agpl-3.0
---

# YOLOv10: Real-Time End-to-End Object Detection

ONNX weights for https://github.com/THU-MIG/yolov10.

Latency-accuracy trade-offs | Size-accuracy trade-offs
:-------------------------:|:-------------------------:
![latency-accuracy trade-offs](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/cXru_kY_pRt4n4mHERnFp.png) | ![size-accuracy trade-offs](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/8apBp9fEZW2gHVdwBN-nC.png)

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

**Example:** Perform object detection.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model
const model = await AutoModel.from_pretrained('onnx-community/yolov10l', {
    // quantized: false, // (Optional) Use unquantized version.
});

// Load processor
const processor = await AutoProcessor.from_pretrained('onnx-community/yolov10l');

// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/city-streets.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);

// Run object detection
const { output0 } = await model({ images: pixel_values });
const predictions = output0.tolist()[0];
const threshold = 0.5;
for (const [xmin, ymin, xmax, ymax, score, id] of predictions) {
    if (score < threshold) continue;
    const bbox = [xmin, ymin, xmax, ymax].map(x => x.toFixed(2)).join(', ');
    console.log(`Found "${model.config.id2label[id]}" at [${bbox}] with score ${score.toFixed(2)}.`);
}
// Found "person" at [473.05, 430.35, 533.53, 532.43] with score 0.92.
// Found "car" at [447.48, 378.60, 639.69, 478.38] with score 0.92.
// Found "person" at [549.94, 260.96, 591.81, 331.22] with score 0.91.
// Found "person" at [33.50, 469.62, 78.99, 571.88] with score 0.90.
// Found "car" at [177.90, 337.14, 399.34, 418.01] with score 0.90.
// Found "traffic light" at [208.80, 55.90, 233.13, 101.39] with score 0.90.
// Found "bicycle" at [449.02, 477.23, 555.98, 537.56] with score 0.89.
// Found "bicycle" at [352.45, 527.27, 463.67, 588.07] with score 0.89.
// ...
```
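
For downstream use (e.g., drawing the boxes or serializing the results), it can be convenient to collect the raw prediction rows into structured objects instead of logging them. The snippet below is a minimal sketch that builds on the variables from the example above (`predictions`, `threshold`, and `model.config.id2label`); the `detections` array and its field names are illustrative choices, not part of the Transformers.js API. Note that the box coordinates refer to the preprocessed input, so they may need rescaling if the original image has a different size.

```js
// Minimal post-processing sketch (illustrative; continues from the example above).
// Collect detections above the confidence threshold into plain objects.
const detections = predictions
    .filter(([, , , , score]) => score >= threshold)
    .map(([xmin, ymin, xmax, ymax, score, id]) => ({
        label: model.config.id2label[id],
        score,
        box: { xmin, ymin, xmax, ymax },
    }));

console.log(detections);
// e.g. [ { label: 'person', score: 0.92, box: { xmin: 473.05, ymin: 430.35, xmax: 533.53, ymax: 532.43 } }, ... ]
```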