jadechoghari committed on
Update README.md
README.md
CHANGED
@@ -1,5 +1,5 @@
 ---
-license:
+license: cc-by-nc-2.0
 pipeline_tag: image-to-3d
 ---
 # [ECCV 2024] VFusion3D: Learning Scalable 3D Generative Models from Video Diffusion Models
@@ -26,9 +26,40 @@ Getting started with VFusion3D is super easy! 🤗 Here’s how you can use the
 
 ### Load model directly
 ```python
-
+import torch
+from transformers import AutoModel, AutoProcessor
 
+# load the model and processor
 model = AutoModel.from_pretrained("jadechoghari/vfusion3d", trust_remote_code=True)
+processor = AutoProcessor.from_pretrained("jadechoghari/vfusion3d")
+
+# download and preprocess the image
+import requests
+from PIL import Image
+from io import BytesIO
+
+image_url = 'https://sm.ign.com/ign_nordic/cover/a/avatar-gen/avatar-generations_prsz.jpg'
+response = requests.get(image_url)
+image = Image.open(BytesIO(response.content))
+
+# preprocess the image and get the source camera
+image, source_camera = processor(image)
+
+
+# generate planes (default output)
+output_planes = model(image, source_camera)
+print("Planes shape:", output_planes.shape)
+
+# generate a 3D mesh
+output_planes, mesh_path = model(image, source_camera, export_mesh=True)
+print("Planes shape:", output_planes.shape)
+print("Mesh saved at:", mesh_path)
+
+# generate a video
+output_planes, video_path = model(image, source_camera, export_video=True)
+print("Planes shape:", output_planes.shape)
+print("Video saved at:", video_path)
+
 ```
 
 Check out our [demo app](https://huggingface.co/spaces/jadechoghari/vfusion3d-app) to see VFusion3D in action! 🤗
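The download step in the updated README follows a common pattern: fetch the image bytes over HTTP and decode them in memory with `BytesIO`, rather than saving to disk first. A minimal sketch of that in-memory decode, assuming Pillow is installed and substituting a locally generated image for the network request so it runs offline:

```python
from io import BytesIO
from PIL import Image

# build a tiny RGB image and serialize it into an in-memory buffer,
# emulating the raw bytes a requests.get(...) call would return
buf = BytesIO()
Image.new("RGB", (4, 4), color=(255, 0, 0)).save(buf, format="PNG")
buf.seek(0)

# decode straight from the buffer, as the README does with
# Image.open(BytesIO(response.content))
img = Image.open(buf)
print(img.size, img.mode)  # -> (4, 4) RGB
```

Decoding from memory avoids a temporary file and hands the model pipeline a ready `PIL.Image` object, which is why the README passes `response.content` through `BytesIO` directly.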