hheat committed
Commit 81b3949 • 1 Parent(s): 1dee3e2

add model ckpt

Files changed (2):
  1. README.md +4 -88
  2. best.ckpt +3 -0
README.md CHANGED
@@ -40,109 +40,25 @@ We present a flexible end-to-end feed-forward framework, named the *LucidFusion*
 
 ## 🔧 Training Instructions
 
- Our code is now released!
 
- ### Install
- ```
- conda create -n LucidFusion python=3.9.19
- conda activate LucidFusion
-
- # For example, we use torch 2.3.1 + cuda 11.8; we have also tested the latest torch (2.4.1), which works with the latest xformers (0.0.28).
- pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu118
-
- # xformers is required! Please refer to https://github.com/facebookresearch/xformers for details.
- # [linux only] cuda 11.8 version
- pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118
-
- # For 3D Gaussian Splatting, we use LGM's modified version; for details, please refer to https://github.com/3DTopia/LGM
- git clone --recursive https://github.com/ashawkey/diff-gaussian-rasterization
- pip install ./diff-gaussian-rasterization
-
- # Other dependencies
- pip install -r requirements.txt
- ```
 
 ### Pretrained Weights
 
- Our pre-trained weights will be released soon, please check back!
-
 Our current model loads a pre-trained diffusion model for its config. We use stable-diffusion-2-1-base; to download it, simply run
 ```
 python pretrained/download.py
 ```
 You can omit this step if you already have stable-diffusion-2-1-base: simply update "model_key" with your local SD-2-1 path for the scripts in the scripts/ folder.
 
- ## 🔥 Inference
- A shell script is provided with example files.
- To run it, you first need to set up the pre-trained weights as follows:
-
- ```
- cd LucidFusion
- mkdir -p output/demo
-
- # Download the pretrained weights and name the file best.ckpt
-
- # Place it at LucidFusion/output/demo/best.ckpt
- ```
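
A hedged sketch of that download step, assuming the checkpoint is fetched from the Hugging Face repo this commit belongs to (the repo id below is a placeholder, not a confirmed name):

```
# <namespace>/LucidFusion is a placeholder -- substitute the actual repo id.
huggingface-cli download <namespace>/LucidFusion best.ckpt --local-dir output/demo
```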
 
- We have also provided some preprocessed examples.
-
- For the GSO files, the example objects are "alarm", "chicken", "hat", "lunch_bag", "mario", and "shoe1".
-
- To run the GSO demo:
- ```
- # You can adjust the "DEMO" field inside gso_demo.sh to load other examples (see the sketch after this block).
-
- bash scripts/gso_demo.sh
- ```
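
For instance, switching to the "chicken" object would be a one-line edit in scripts/gso_demo.sh; the exact directory layout of the preprocessed examples is an assumption here:

```
# Hypothetical path -- check where the repo actually stores the GSO examples.
DEMO=examples/gso/chicken
```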
-
- To run the images demo (masks are obtained using preprocess.py), the example objects are "nutella_new", "monkey_chair", and "dog_chair":
-
- ```
- bash scripts/demo.sh
- ```
-
- To run the diffusion demo as a single-image-to-multi-view setup, we use the pixel diffusion trained in CRM, as described in the paper. You can also use other multi-view diffusion models to generate multi-view outputs from a single image.
-
- For dependency issues, please check https://github.com/thu-ml/CRM
-
- We also provide LGM's imagegen diffusion; simply set --crm=false in diffusion_demo.sh. You can change --seed to try a different seed (see the sketch after this block).
-
- ```
- bash scripts/diffusion_demo.sh
- ```
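
A sketch of how those two settings might appear inside scripts/diffusion_demo.sh (the entry-point name is an assumption; only the --crm and --seed flags come from the text above):

```
# Hypothetical invocation inside the script -- adapt to the real entry point.
python diffusion_demo.py --crm=false --seed 42
```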
-
- You can also try your own examples! To do that:
-
- 1. Obtain images and place them in the examples folder:
- ```
- LucidFusion
- ├── examples/
- |   ├── "your-obj-name"/
- |   |   ├── "image_01.png"
- |   |   ├── "image_02.png"
- |   |   ├── ...
- ```
- 2. Run preprocess.py to extract the recentered images and their masks (a quick check is sketched after this list):
- ```
- # Running the following will create two folders (images, masks) in the "your-obj-name" folder.
- # You can check whether the extracted masks are correct.
- python preprocess.py examples/your-obj-name --outdir examples/your-obj-name
- ```
-
- 3. Modify demo.sh to set DEMO="examples/your-obj-name", then run the script:
- ```
- bash scripts/demo.sh
- ```
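
A quick sanity check for step 2: both folders should now exist and contain one file per input image.

```
ls examples/your-obj-name/images examples/your-obj-name/masks
```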
-
- ## 🤗 Gradio Demo
-
- We are currently building an online demo of LucidFusion with Gradio. It is still under development and will be coming out soon!
 
 ## 🚧 Todo
 
 - [x] Release the inference codes
- - [ ] Release our weights
 - [ ] Release the Gradio Demo
 - [ ] Release the Stage 1 and 2 training codes
 
 
 ## 🔧 Training Instructions
 
+ Our inference code is now released!
 
+ Please refer to our [repo](https://github.com/EnVision-Research/LucidFusion/tree/master) for more details.
 
 ### Pretrained Weights
 
 Our current model loads a pre-trained diffusion model for its config. We use stable-diffusion-2-1-base; to download it, simply run
 ```
 python pretrained/download.py
 ```
 You can omit this step if you already have stable-diffusion-2-1-base: simply update "model_key" with your local SD-2-1 path for the scripts in the scripts/ folder.
 
+ Our pre-trained weights are released!
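
If you prefer to fetch the diffusion weights by hand instead of running pretrained/download.py, a minimal sketch with huggingface-cli (the target directory is arbitrary; point "model_key" at it):

```
# Downloads stabilityai/stable-diffusion-2-1-base from the Hugging Face Hub.
huggingface-cli download stabilityai/stable-diffusion-2-1-base --local-dir ./pretrained/stable-diffusion-2-1-base
```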
 
 ## 🚧 Todo
 
 - [x] Release the inference codes
+ - [x] Release our weights
 - [ ] Release the Gradio Demo
 - [ ] Release the Stage 1 and 2 training codes
 
best.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f20c0eba48b1130311b3a82373966a78b75c363b6be77922bca3c6576a4e700
+ size 11793105544
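
Note that best.ckpt is stored with Git LFS: the three lines above are the pointer file, while the actual weights (size 11793105544 bytes, roughly 11.8 GB) live in LFS storage. A minimal sketch for fetching the real file after cloning, assuming git-lfs is installed:

```
git lfs install
git lfs pull --include="best.ckpt"
```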