Spaces: Runtime error
Upload app.py
app.py CHANGED
@@ -18,9 +18,24 @@ subprocess.run(shlex.split('pip install wheel/torchmcubes-0.1.0-cp310-cp310-linu
 from tsr.system import TSR
 from tsr.utils import remove_background, resize_foreground, to_gradio_3d_orientation
 
+HEADER = """
+# Generate 3D Assets for Roblox
+
+With this Space, you can generate 3D Assets using AI for your Roblox game for free.
+
+Simply follow the 4 steps below.
+
+1. Generate a 3D Mesh using an image model as input.
+2. Simplify the Mesh to get a lower polygon count.
+3. (Optional) Make the Mesh smoother.
+4. Get the Material.
+
+We wrote a tutorial here
+
+"""
 
 STEP1_HEADER = """
-
+## Step 1: Generate the 3D Mesh
 
 For this step, we use TripoSR, an open-source model for **fast** feedforward 3D reconstruction from a single image, developed in collaboration between [Tripo AI](https://www.tripo3d.ai/) and [Stability AI](https://stability.ai/).
 
@@ -33,9 +48,45 @@ During this step, you need to upload an image of what you want to generate a 3D
 
 - If the result is unsatisfying, try changing the foreground ratio; it might improve the results.
 
+- To learn more about the Marching Cubes Resolution, check this: https://huggingface.co/learn/ml-for-3d-course/en/unit4/marching-cubes#marching-cubes
+
 """
+
+STEP2_HEADER = """
+## Step 2: Simplify the generated 3D Mesh
+
+ADD ILLUSTRATION
+
+The generated 3D Mesh contains too many polygons; fortunately, we can use another AI model to help us optimize it.
+
+The model we use is called [MeshAnythingV2]().
+
+## 💡 Tips
+
+- We don't click on Preprocess with Marching Cubes, because the input mesh was already produced by it in the last step.
+
+- Limited by computational resources, MeshAnything is trained on meshes with fewer than 1600 faces and cannot generate meshes with more than 1600 faces. The shape of the input mesh should be sharp enough; otherwise, it will be challenging to represent it with only 1600 faces. Thus, feed-forward image-to-3D methods may often produce bad results due to insufficient shape quality.
+
+"""
+
+STEP3_HEADER = """
+## Step 3 (optional): Shader Smooth
+
+- The mesh simplified in step 2 looks low poly. One way to make it smoother is to use Shader Smooth.
+- You can usually do this in Blender, but we can do it directly here.
+
+ADD ILLUSTRATION
+
+ADD SHADERSMOOTH
+"""
+
+STEP4_HEADER = """
+## Step 4: Get the Mesh Material
+
+"""
+
+
 # This part of the code (check_input_image and preprocess) was taken from https://huggingface.co/spaces/stabilityai/TripoSR/blob/main/app.py
 if torch.cuda.is_available():
     device = "cuda:0"
@@ -94,6 +145,7 @@ def generate(image, mc_resolution, formats=["obj", "glb"]):
 
 
 with gr.Blocks() as demo:
+    gr.Markdown(HEADER)
     gr.Markdown(STEP1_HEADER)
     with gr.Row(variant = "panel"):
         with gr.Column():
@@ -150,6 +202,9 @@ with gr.Blocks() as demo:
             inputs=[processed_image, mc_resolution],
             outputs=[output_model_obj, output_model_glb],
         )
+    gr.Markdown(STEP2_HEADER)
+    gr.Markdown(STEP3_HEADER)
+    gr.Markdown(STEP4_HEADER)
 
 demo.queue(max_size=10)
 demo.launch()
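The device selection shown in the diff context (`if torch.cuda.is_available(): device = "cuda:0"`) can be mirrored by a small framework-free helper; this is only a sketch of the same logic, and the function name `pick_device` is ours, not from app.py:

```python
def pick_device(cuda_available: bool) -> str:
    # Mirror app.py's choice: first CUDA GPU when available, else CPU.
    return "cuda:0" if cuda_available else "cpu"

print(pick_device(True))   # cuda:0
print(pick_device(False))  # cpu
```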
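The hunk header also shows the Space installing a prebuilt torchmcubes wheel at startup via `subprocess.run(shlex.split(...))`. A minimal sketch of that pattern, using a placeholder wheel path because the real path is truncated in the diff (the helper name `build_install_cmd` is ours):

```python
import shlex
import sys

# Placeholder wheel path; the real torchmcubes wheel path is truncated in the diff.
WHEEL = "wheel/example-0.1.0-cp310-cp310-linux_x86_64.whl"

def build_install_cmd(wheel_path: str) -> list:
    """Build the pip command the app runs at startup to install a local wheel."""
    return [sys.executable, "-m", "pip"] + shlex.split("install " + wheel_path)

cmd = build_install_cmd(WHEEL)
# subprocess.run(cmd, check=True)  # executed once when the Space boots
print(cmd[-1])  # wheel/example-0.1.0-cp310-cp310-linux_x86_64.whl
```

Invoking pip through `sys.executable -m pip` keeps the install bound to the interpreter running the Space.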