Commit 6427986

Parent(s):

Duplicate from openai/shap-e

Co-authored-by: Patrick von Platen <patrickvonplaten@users.noreply.huggingface.co>
- .gitattributes +34 -0
- README.md +107 -0
- model_index.json +28 -0
- prior/config.json +19 -0
- prior/diffusion_pytorch_model.bin +3 -0
- prior/diffusion_pytorch_model.fp16.safetensors +3 -0
- renderer/config.json +34 -0
- renderer/diffusion_pytorch_model.bin +3 -0
- renderer/diffusion_pytorch_model.fp16.safetensors +3 -0
- scheduler/scheduler_config.json +11 -0
- shap_e_renderer/config.json +39 -0
- shap_e_renderer/diffusion_pytorch_model.bin +3 -0
- text_encoder/config.json +25 -0
- text_encoder/model.fp16.safetensors +3 -0
- text_encoder/pytorch_model.bin +3 -0
- tokenizer/merges.txt +0 -0
- tokenizer/special_tokens_map.json +24 -0
- tokenizer/tokenizer_config.json +33 -0
- tokenizer/vocab.json +0 -0
.gitattributes
ADDED
@@ -0,0 +1,34 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,107 @@
---
license: mit
tags:
- text-to-image
- shap-e
- diffusers
pipeline_tag: text-to-3d
---

# Shap-E

Shap-E introduces a diffusion process that can generate a 3D image from a text prompt. It was introduced in [Shap-E: Generating Conditional 3D Implicit Functions](https://arxiv.org/abs/2305.02463) by Heewoo Jun and Alex Nichol from OpenAI.

The original repository of Shap-E can be found here: https://github.com/openai/shap-e.

_The authors of Shap-E didn't author this model card. They provide a separate model card [here](https://github.com/openai/shap-e/blob/main/model-card.md)._

## Introduction

The abstract of the Shap-E paper:

*We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. We release model weights, inference code, and samples at [this https URL](https://github.com/openai/shap-e).*

## Released checkpoints

The authors released the following checkpoints:

* [openai/shap-e](https://hf.co/openai/shap-e): produces a 3D image from a text input prompt
* [openai/shap-e-img2img](https://hf.co/openai/shap-e-img2img): produces a 3D image from a synthetic 2D image (a usage sketch follows the text-to-3D example below)

## Usage examples in 🧨 diffusers

First make sure you have installed all the dependencies:

```bash
pip install transformers accelerate -q
pip install git+https://github.com/huggingface/diffusers@shap-ee
```

Once the dependencies are installed, use the code below:

```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_gif

ckpt_id = "openai/shap-e"
pipe = ShapEPipeline.from_pretrained(ckpt_id).to("cuda")

guidance_scale = 15.0
prompt = "a shark"
images = pipe(
    prompt,
    guidance_scale=guidance_scale,
    num_inference_steps=64,
    size=256,
).images

gif_path = export_to_gif(images, "shark_3d.gif")
```
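Shap-E can also return textured meshes instead of rendered frames. The sketch below is a minimal, hedged variant of the example above; it assumes your diffusers version supports `output_type="mesh"` for `ShapEPipeline` and provides `export_to_ply` in `diffusers.utils`.

```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_ply

ckpt_id = "openai/shap-e"
pipe = ShapEPipeline.from_pretrained(ckpt_id).to("cuda")

# Ask the pipeline for a mesh instead of a stack of rendered images.
mesh = pipe(
    "a shark",
    guidance_scale=15.0,
    num_inference_steps=64,
    size=256,
    output_type="mesh",
).images[0]

# Write the mesh to a .ply file that can be opened in Blender or MeshLab.
ply_path = export_to_ply(mesh, "shark_3d.ply")
```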
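The [openai/shap-e-img2img](https://hf.co/openai/shap-e-img2img) checkpoint listed above follows the same pattern but conditions on a synthetic 2D image rather than a prompt. A minimal sketch, assuming `ShapEImg2ImgPipeline` and `load_image` are available in your diffusers version; the image URL is only a placeholder:

```python
import torch
from diffusers import ShapEImg2ImgPipeline
from diffusers.utils import export_to_gif, load_image

pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img").to("cuda")

# Placeholder input: replace with a synthetic rendering of the object you want in 3D.
image = load_image("https://example.com/my_render.png")

images = pipe(
    image,
    guidance_scale=3.0,
    num_inference_steps=64,
    size=256,
).images

gif_path = export_to_gif(images, "object_3d.gif")
```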
## Results

<table>
<tbody>
<tr>
<td align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/bird_3d.gif" alt="a bird">
</td>
<td align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/shark_3d.gif" alt="a shark">
</td>
<td align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/veg_3d.gif" alt="A bowl of vegetables">
</td>
</tr>
<tr>
<td align="center">A bird</td>
<td align="center">A shark</td>
<td align="center">A bowl of vegetables</td>
</tr>
</tbody>
</table>

## Training details

Refer to the [original paper](https://arxiv.org/abs/2305.02463).

## Known limitations and potential biases

Refer to the [original model card](https://github.com/openai/shap-e/blob/main/model-card.md).

## Citation

```bibtex
@misc{jun2023shape,
      title={Shap-E: Generating Conditional 3D Implicit Functions},
      author={Heewoo Jun and Alex Nichol},
      year={2023},
      eprint={2305.02463},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
model_index.json
ADDED
@@ -0,0 +1,28 @@
{
  "_class_name": "ShapEPipeline",
  "_diffusers_version": "0.17.0.dev0",
  "scheduler": [
    "diffusers",
    "HeunDiscreteScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModelWithProjection"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "prior": [
    "diffusers",
    "PriorTransformer"
  ],
  "renderer": [
    "shap_e",
    "ShapERenderer"
  ],
  "shap_e_renderer": [
    "shap_e",
    "ShapERenderer"
  ]
}
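`model_index.json` maps every pipeline component to the library and class that loads it from the subfolder of the same name. As a rough sketch of what `ShapEPipeline.from_pretrained` assembles, the components can also be loaded individually (assuming diffusers and transformers are installed):

```python
from diffusers import HeunDiscreteScheduler, PriorTransformer
from transformers import CLIPTextModelWithProjection, CLIPTokenizer

repo = "openai/shap-e"

# Each subfolder name matches a key in model_index.json.
scheduler = HeunDiscreteScheduler.from_pretrained(repo, subfolder="scheduler")
prior = PriorTransformer.from_pretrained(repo, subfolder="prior")
text_encoder = CLIPTextModelWithProjection.from_pretrained(repo, subfolder="text_encoder")
tokenizer = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
```

The `renderer`/`shap_e_renderer` entries point at `ShapERenderer` from the diffusers Shap-E pipeline module, so they are normally left to the pipeline loader rather than instantiated by hand.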
prior/config.json
ADDED
@@ -0,0 +1,19 @@
{
  "_class_name": "PriorTransformer",
  "_diffusers_version": "0.18.0.dev0",
  "added_emb_type": null,
  "additional_embeddings": 0,
  "attention_head_dim": 64,
  "clip_embed_dim": 2048,
  "dropout": 0.0,
  "embedding_dim": 1024,
  "embedding_proj_dim": 768,
  "embedding_proj_norm_type": null,
  "encoder_hid_proj_type": null,
  "norm_in_type": "layer",
  "num_attention_heads": 16,
  "num_embeddings": 1024,
  "num_layers": 24,
  "time_embed_act_fn": "gelu",
  "time_embed_dim": 4096
}
prior/diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:df21cb49c7f10eb02f6ce485a59c86601d03707b80d715335f0be6be89b1226e
size 1262937295
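Files like the one above are Git LFS pointers: three lines giving the spec version, the SHA-256 object id, and the byte size, while the actual tensors live in LFS storage and are fetched when the repository is downloaded. A minimal sketch of pulling the real weights with `huggingface_hub` (an extra dependency, not required by the model card itself):

```python
from huggingface_hub import snapshot_download

# Downloads the real weight files that the LFS pointers reference.
local_dir = snapshot_download("openai/shap-e")
print(local_dir)
```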
prior/diffusion_pytorch_model.fp16.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04ad95e7374796a817f5b5a32b885be457b013ffdadf5da2f99883886f417aac
size 631435880
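The prior and renderer each ship full-precision `.bin` weights alongside half-precision `.fp16.safetensors` files such as the one above. A minimal sketch of loading the half-precision variant, assuming a diffusers version that supports the `variant` argument of `from_pretrained`:

```python
import torch
from diffusers import ShapEPipeline

# Selects the *.fp16.safetensors files and keeps the weights in float16 on GPU.
pipe = ShapEPipeline.from_pretrained(
    "openai/shap-e",
    variant="fp16",
    torch_dtype=torch.float16,
).to("cuda")
```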
renderer/config.json
ADDED
@@ -0,0 +1,34 @@
{
  "_class_name": "ShapERenderer",
  "_diffusers_version": "0.18.0.dev0",
  "act_fn": "swish",
  "d_hidden": 256,
  "d_latent": 1024,
  "insert_direction_at": 4,
  "n_hidden_layers": 6,
  "n_output": 12,
  "param_names": [
    "nerstf.mlp.0.weight",
    "nerstf.mlp.1.weight",
    "nerstf.mlp.2.weight",
    "nerstf.mlp.3.weight"
  ],
  "param_shapes": [
    [
      256,
      93
    ],
    [
      256,
      256
    ],
    [
      256,
      256
    ],
    [
      256,
      256
    ]
  ]
}
renderer/diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2751cbad694f7fef9022f88673c065df49283d977ff93e88e133f3585e2dfd28
size 905200536
renderer/diffusion_pytorch_model.fp16.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:394d52e159447b2bfb944d3bfa4f438180b72ba40644dc4c730282e7a9b62c65
size 452597858
scheduler/scheduler_config.json
ADDED
@@ -0,0 +1,11 @@
{
  "_class_name": "HeunDiscreteScheduler",
  "_diffusers_version": "0.17.0.dev0",
  "beta_schedule": "exp",
  "trained_betas": null,
  "num_train_timesteps": 1024,
  "clip_sample": true,
  "clip_sample_range": 1.0,
  "prediction_type": "sample",
  "use_karras_sigmas": true
}
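This scheduler predicts the clean sample directly (`prediction_type: "sample"`), clips it to [-1, 1] (`clip_sample_range: 1.0`), and resamples the 1024 training timesteps onto Karras sigmas at inference time. A minimal sketch of inspecting the schedule it builds for the 64 steps used in the README example (assuming only that diffusers is installed):

```python
from diffusers import HeunDiscreteScheduler

scheduler = HeunDiscreteScheduler.from_pretrained("openai/shap-e", subfolder="scheduler")

# Build the inference-time schedule for 64 steps and inspect it.
scheduler.set_timesteps(num_inference_steps=64)
print(len(scheduler.timesteps))            # Heun interleaves second-order sub-steps, so this exceeds 64.
print(scheduler.config.prediction_type)    # "sample"
print(scheduler.config.use_karras_sigmas)  # True
```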
shap_e_renderer/config.json
ADDED
@@ -0,0 +1,39 @@
{
  "_class_name": "ShapERenderer",
  "_diffusers_version": "0.19.0.dev0",
  "act_fn": "swish",
  "background": [
    255.0,
    255.0,
    255.0
  ],
  "d_hidden": 256,
  "d_latent": 1024,
  "insert_direction_at": 4,
  "n_hidden_layers": 6,
  "n_output": 12,
  "param_names": [
    "nerstf.mlp.0.weight",
    "nerstf.mlp.1.weight",
    "nerstf.mlp.2.weight",
    "nerstf.mlp.3.weight"
  ],
  "param_shapes": [
    [
      256,
      93
    ],
    [
      256,
      256
    ],
    [
      256,
      256
    ],
    [
      256,
      256
    ]
  ]
}
shap_e_renderer/diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b63c2b52fbff7cc2c0b88b196f169e8adbcc2c298cadab30afe63c8a7241a05
size 905233202
text_encoder/config.json
ADDED
@@ -0,0 +1,25 @@
{
  "_name_or_path": "openai/clip-vit-large-patch14",
  "architectures": [
    "CLIPTextModelWithProjection"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 0,
  "dropout": 0.0,
  "eos_token_id": 2,
  "hidden_act": "quick_gelu",
  "hidden_size": 768,
  "initializer_factor": 1.0,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 77,
  "model_type": "clip_text_model",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "projection_dim": 768,
  "torch_dtype": "float32",
  "transformers_version": "4.29.2",
  "vocab_size": 49408
}
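The text encoder is the CLIP ViT-L/14 text tower with a 768-dimensional projection head; its projected text embedding is used to condition the prior. A minimal sketch of computing that embedding with standard transformers APIs:

```python
import torch
from transformers import CLIPTextModelWithProjection, CLIPTokenizer

repo = "openai/shap-e"
text_encoder = CLIPTextModelWithProjection.from_pretrained(repo, subfolder="text_encoder")
tokenizer = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")

# Tokenize to the 77-token context length defined in the config above.
inputs = tokenizer(["a shark"], padding="max_length", max_length=77, return_tensors="pt")
with torch.no_grad():
    outputs = text_encoder(**inputs)

print(outputs.text_embeds.shape)  # torch.Size([1, 768]), i.e. projection_dim
```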
text_encoder/model.fp16.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc42badf529dd83f2f7c3d20fe6bda1e22036162f37c4c668b9e130884e20561
size 247324608
text_encoder/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85f5bcf101dde33d8ab9f7e5e1678339fa4258ea07bc65e6ca66e01f9de99622
size 494664885
tokenizer/merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer/special_tokens_map.json
ADDED
@@ -0,0 +1,24 @@
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<|endoftext|>",
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer/tokenizer_config.json
ADDED
@@ -0,0 +1,33 @@
{
  "add_prefix_space": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "clean_up_tokenization_spaces": true,
  "do_lower_case": true,
  "eos_token": {
    "__type": "AddedToken",
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "errors": "replace",
  "model_max_length": 77,
  "pad_token": "<|endoftext|>",
  "tokenizer_class": "CLIPTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer/vocab.json
ADDED
The diff for this file is too large to render.
See raw diff