Commit 70ab886 by Xenova (parent: b8f0ddf): Update README.md
Files changed (1): README.md (+58 -1)
---
license: apple-ascl
base_model:
- apple/DepthPro
---

https://huggingface.co/apple/DepthPro with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```

```js
import { AutoProcessor, AutoModelForDepthEstimation, RawImage } from "@huggingface/transformers";

// Load model and processor
const depth = await AutoModelForDepthEstimation.from_pretrained("onnx-community/DepthPro-ONNX", { dtype: "q4" });
const processor = await AutoProcessor.from_pretrained("onnx-community/DepthPro-ONNX");

// Read and prepare image
const image = await RawImage.read("https://raw.githubusercontent.com/huggingface/transformers.js-examples/main/depth-pro-node/assets/image.jpg");
const inputs = await processor(image);

// Run depth estimation model
const { predicted_depth, focallength_px } = await depth(inputs);

// Find the minimum and maximum depth values for normalization
const depth_map_data = predicted_depth.data;
let minDepth = Infinity;
let maxDepth = -Infinity;
for (let i = 0; i < depth_map_data.length; ++i) {
  minDepth = Math.min(minDepth, depth_map_data[i]);
  maxDepth = Math.max(maxDepth, depth_map_data[i]);
}

// Normalize to [0, 1], flip so nearer points are brighter, and scale to uint8
const depth_tensor = predicted_depth
  .sub_(minDepth)
  .div_(-(maxDepth - minDepth)) // Negative divisor flips the map for visualization purposes
  .add_(1)
  .clamp_(0, 1)
  .mul_(255)
  .round_()
  .to("uint8");

// Save the depth map
const depth_image = RawImage.fromTensor(depth_tensor);
depth_image.save("depth.png");
```
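The chained in-place tensor operations above boil down to a min-max normalization followed by a flip and a scale to `[0, 255]`. A plain-JavaScript sketch of the same arithmetic on an ordinary array (hypothetical depth values, no Transformers.js required; `depthToPixels` is an illustrative helper, not part of the library):

```javascript
// Mirrors the sub_/div_/add_/clamp_/mul_/round_ chain above:
// normalize depths to [0, 1], flip so nearer points are brighter,
// then scale to 8-bit pixel values. Assumes max > min.
function depthToPixels(depths) {
  const min = Math.min(...depths);
  const max = Math.max(...depths);
  return depths.map((d) => {
    // (d - min) / -(max - min) + 1  is the same as  1 - (d - min) / (max - min)
    const flipped = (d - min) / -(max - min) + 1;
    const clamped = Math.min(1, Math.max(0, flipped));
    return Math.round(clamped * 255);
  });
}

// Hypothetical depth values in meters: the nearest point becomes the brightest pixel
console.log(depthToPixels([0.5, 1.0, 2.0])); // [ 255, 170, 0 ]
```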

The following images illustrate the input image and its corresponding depth map generated by the model:

| Input Image | Depth Map |
| ---------------------------------- | -------------------------------- |
| ![Input Image](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/LDQhhHYS1CXAXw65VflRi.jpeg) | ![Depth Map](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/uYLRu3P1eUOVJoWTrLLtP.png) |
---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).