Commit eaa7704 (1 parent: 9c48d03)
Update README.md (#1)
Co-authored-by: Dylan Ebert <dylanebert@users.noreply.huggingface.co>
README.md CHANGED
@@ -6,6 +6,102 @@ tags:
- pytorch_model_hub_mixin
---

# MeshAnythingV2

Library: [https://github.com/buaacyw/MeshAnythingV2](https://github.com/buaacyw/MeshAnythingV2)
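
Since the model card carries the `pytorch_model_hub_mixin` tag, the checkpoint is published via huggingface_hub's `PyTorchModelHubMixin` and can typically be loaded with `from_pretrained`. The sketch below is illustrative only: the import path, class name, and repo id are assumptions, not verified against the repo; check the MeshAnythingV2 code for the actual module and arguments.
```
# Minimal sketch of loading the checkpoint via PyTorchModelHubMixin.
# ASSUMPTIONS: import path, class name, and repo id below are guesses;
# consult the MeshAnythingV2 repo for the real ones.
from MeshAnything.models.meshanything_v2 import MeshAnything  # assumed import path

model = MeshAnything.from_pretrained("Yiwen-ntu/MeshAnythingV2")  # assumed repo id
model.eval()
```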

## Contents
- [Contents](#contents)
- [Installation](#installation)
- [Usage](#usage)
- [Important Notes](#important-notes)
- [Acknowledgement](#acknowledgement)
- [BibTeX](#bibtex)

## Installation
Our environment has been tested on Ubuntu 22 with CUDA 11.8 on an A800 GPU.

1. Clone our repo and create the conda environment:
```
git clone https://github.com/buaacyw/MeshAnythingV2.git && cd MeshAnythingV2
conda create -n MeshAnythingV2 python==3.10.13 -y
conda activate MeshAnythingV2
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
pip install -U gradio
```
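
After installation, a quick optional sanity check (plain PyTorch calls, not part of the repo) confirms that the CUDA 11.8 build is active:
```
# Optional sanity check: confirm the CUDA build of PyTorch is installed and usable.
import torch
print(torch.__version__)          # expected: 2.1.1+cu118
print(torch.cuda.is_available())  # should print True on a CUDA-capable machine
```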

## Usage

### Text/image to Artist-Created Mesh
We suggest using [Rodin](https://hyperhuman.deemos.com/rodin) to first turn text or an image into a dense mesh, and then feeding that dense mesh to MeshAnythingV2.
```
# Put the output obj file from Rodin into rodin_result and use the following command to generate the Artist-Created Mesh.
# We suggest using the --mc flag to preprocess the input mesh with Marching Cubes first. This helps align the inference point cloud with our training domain.
python main.py --input_dir rodin_result --out_dir mesh_output --input_type mesh --mc
```

### Local Gradio Demo <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a>
```
python app.py
```

### Mesh command line inference
#### Important note: if your mesh input was not produced by Marching Cubes, we suggest preprocessing it with Marching Cubes first (simply add --mc).
```
# folder input
python main.py --input_dir examples --out_dir mesh_output --input_type mesh

# single file input
python main.py --input_path examples/wand.obj --out_dir mesh_output --input_type mesh

# preprocess with Marching Cubes first
python main.py --input_dir examples --out_dir mesh_output --input_type mesh --mc

# The Marching Cubes resolution defaults to 128. For delicate meshes this may not be sufficient;
# raising it takes more time to preprocess but should achieve a better result.
# Change it with --mc_level: 7 -> 128 (2^7), 8 -> 256 (2^8).
# 256-resolution Marching Cubes example:
python main.py --input_dir examples --out_dir mesh_output --input_type mesh --mc --mc_level 8
```
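
For context, the sketch below shows roughly what a Marching Cubes re-mesh of an input shape involves. The repo's --mc flag performs this step internally, so this is illustrative only; it assumes trimesh and scikit-image are installed, and the paths and resolution are placeholders.
```
# Illustrative sketch only: voxelize a mesh and re-extract its surface with Marching Cubes.
# Not the project's code; trimesh and scikit-image are assumed to be installed.
import trimesh
from trimesh.voxel import ops

def marching_cubes_remesh(path, resolution=128):
    mesh = trimesh.load(path, force="mesh")
    pitch = mesh.extents.max() / resolution   # voxel size for the target grid resolution
    voxels = mesh.voxelized(pitch).fill()     # filled occupancy grid of the shape
    # Extract an isosurface from the occupancy grid (the result is re-normalized downstream anyway).
    return ops.matrix_to_marching_cubes(voxels.matrix, pitch=pitch)

marching_cubes_remesh("examples/wand.obj", resolution=128).export("wand_mc.obj")
```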

### Point cloud command line inference
```
# Note: if you want to use your own point cloud, please make sure normals are included.
# The file should be a .npy file with shape (N, 6), where N is the number of points;
# the first 3 columns are the coordinates and the last 3 columns are the normals.

# inference for a folder
python main.py --input_dir pc_examples --out_dir pc_output --input_type pc_normal

# inference for a single file
python main.py --input_path pc_examples/mouse.npy --out_dir pc_output --input_type pc_normal
```
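
If you are preparing your own point cloud, the sketch below shows one way to build a compliant (N, 6) .npy from a mesh. It assumes trimesh is installed; the input path, output name, and sample count are placeholders, not values from the project.
```
# Sketch: sample an oriented point cloud from a mesh and save it in the (N, 6)
# layout described above: xyz in the first 3 columns, normals in the last 3.
# Assumes trimesh is installed; paths and the sample count are placeholders.
import numpy as np
import trimesh

mesh = trimesh.load("examples/wand.obj", force="mesh")
points, face_idx = trimesh.sample.sample_surface(mesh, count=8192)
normals = mesh.face_normals[face_idx]        # per-point normals from the sampled faces

pc = np.concatenate([points, normals], axis=1).astype(np.float32)   # shape (8192, 6)
np.save("pc_examples/my_shape.npy", pc)
```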

## Important Notes
- It takes about 8 GB of GPU memory and roughly 45 s to generate a mesh on an A6000 GPU (depending on the face count of the generated mesh).
- The input mesh will be normalized to a unit bounding box (see the sketch after this list). The up vector of the input mesh should be +Y for better results.
- Limited by computational resources, MeshAnything is trained on meshes with fewer than 1600 faces and cannot generate meshes with more than 1600 faces. The shape of the input mesh should be sharp enough; otherwise, it will be challenging to represent it with only 1600 faces. Feed-forward 3D generation methods therefore often produce poor results due to insufficient shape quality. We suggest using results from 3D reconstruction, scanning, SDS-based methods (like [DreamCraft3D](https://github.com/deepseek-ai/DreamCraft3D)), or [Rodin](https://hyperhuman.deemos.com/rodin) as the input to MeshAnything.
- Please refer to https://huggingface.co/spaces/Yiwen-ntu/MeshAnything/tree/main/examples for more examples.
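
As a reference for the normalization mentioned above, here is a minimal sketch of mapping vertices into a unit bounding box; the repo applies its own normalization internally, so treat this as illustrative only.
```
# Illustrative only: center vertices and scale them so the longest bounding-box
# side has length 1, i.e. the "unit bounding box" normalization noted above.
import numpy as np

def normalize_to_unit_bbox(vertices: np.ndarray) -> np.ndarray:
    vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
    center = (vmin + vmax) / 2.0
    scale = float((vmax - vmin).max())
    return (vertices - center) / scale
```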

## Acknowledgement

Our code is based on these wonderful repos:

* [MeshAnything](https://github.com/buaacyw/MeshAnything)
* [MeshGPT](https://nihalsid.github.io/mesh-gpt/)
* [meshgpt-pytorch](https://github.com/lucidrains/meshgpt-pytorch)
* [Michelangelo](https://github.com/NeuralCarver/Michelangelo)
* [transformers](https://github.com/huggingface/transformers)
* [vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch)

## BibTeX
```
@misc{chen2024meshanythingv2artistcreatedmesh,
      title={MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization},
      author={Yiwen Chen and Yikai Wang and Yihao Luo and Zhengyi Wang and Zilong Chen and Jun Zhu and Chi Zhang and Guosheng Lin},
      year={2024},
      eprint={2408.02555},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.02555},
}
```