MarcusLoren committed
Commit 405c047
1 Parent(s): 54ff45b

Update README.md

Files changed (1): README.md +5 -0
README.md CHANGED
@@ -3,6 +3,9 @@ license: apache-2.0
 pipeline_tag: text-to-3d
 ---
 
+<p style="text-align: center;">
+<a href="https://huggingface.co/spaces/MarcusLoren/MeshGPT" style="font-size: 24px; font-weight: 600;">DEMO</a>
+</p>
 
 ### MeshGPT-alpha-preview
 
@@ -10,6 +13,8 @@ MeshGPT is a text-to-3D model based on an autoencoder (tokenizer) and a transfor
 The autoencoder's purpose is to translate 3D models into tokens, which its decoder can then convert back into a 3D mesh.<br/>
 To the best of my knowledge, this makes the autoencoder the **world's first** published **3D model tokenizer** (correct me if I'm wrong!)
 
+
+
 ## Model Details
 The autoencoder (tokenizer) is a relatively small model with 50M parameters; the transformer uses 184M parameters and its core is based on GPT2-small.
 Due to hardware constraints, it was trained with a codebook/vocabulary size of 2048.<br/>
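For readers new to the idea, the round trip the README describes (mesh → tokens → mesh) is essentially vector quantization against a fixed-size codebook. The sketch below illustrates only that mechanism: the codebook size of 2048 comes from the model card, while every name, dimension, and the nearest-neighbour lookup are illustrative assumptions, not MeshGPT's actual implementation.

```python
# Conceptual sketch of mesh tokenization via a codebook lookup.
# NOT the actual MeshGPT code: the learned encoder/decoder networks
# are omitted, and all names and dimensions here are assumptions.
import torch

CODEBOOK_SIZE = 2048  # vocabulary size stated in the model card
EMBED_DIM = 192       # hypothetical embedding width

# A (normally learned) codebook: one embedding vector per token id.
codebook = torch.randn(CODEBOOK_SIZE, EMBED_DIM)

def encode(face_embeddings: torch.Tensor) -> torch.Tensor:
    """Quantize continuous per-face embeddings to nearest codebook ids."""
    dists = torch.cdist(face_embeddings, codebook)  # (faces, CODEBOOK_SIZE)
    return dists.argmin(dim=-1)                     # token ids in [0, 2048)

def decode(token_ids: torch.Tensor) -> torch.Tensor:
    """Look tokens back up; a learned decoder would map these to triangles."""
    return codebook[token_ids]

faces = torch.randn(64, EMBED_DIM)  # stand-in for encoded mesh faces
tokens = encode(faces)              # mesh   -> token sequence
recon = decode(tokens)              # tokens -> embeddings
print(tokens.shape, recon.shape)    # torch.Size([64]) torch.Size([64, 192])
```

In the real model, the per-face embeddings come from a learned encoder and a learned decoder reconstructs triangles from the quantized codes; the GPT2-small-based transformer is then trained to generate such token sequences, which is what enables text-to-3D generation.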