michaelfeil committed
Commit 942cdd0
1 Parent(s): 2095df5

Upload Salesforce/codet5p-770m ctranslate fp16 weights

README.md ADDED
@@ -0,0 +1,119 @@
---
tags:
- ctranslate2
- int8
- float16

license: bsd-3-clause
---
# Fast Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.

Quantized version of [Salesforce/codet5p-770m](https://huggingface.co/Salesforce/codet5p-770m)
```bash
pip install "hf-hub-ctranslate2>=2.0.6"
```
Converted on 2023-05-20 using
```bash
ct2-transformers-converter --model Salesforce/codet5p-770m --output_dir /home/michael/tmp-ct2fast-codet5p-770m --force --copy_files merges.txt README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```

Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-codet5p-770m"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model
model = TranslatorCT2fromHfHub(
    # load in int8_float16 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
    tokenizer=AutoTokenizer.from_pretrained("Salesforce/codet5p-770m")
)
outputs = model.generate(
    text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
)
print(outputs)
```
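
For machines without a GPU, the same wrapper can be pointed at the CPU with plain `int8`, matching the compatibility notes above. A minimal sketch, assuming the identical `hf-hub-ctranslate2` API:

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub
from transformers import AutoTokenizer

# int8 weights on CPU, per the compute_type notes above
model = TranslatorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-codet5p-770m",
    device="cpu",
    compute_type="int8",
    tokenizer=AutoTokenizer.from_pretrained("Salesforce/codet5p-770m")
)
print(model.generate(text=["def print_hello_world():<extra_id_0>"]))
```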

# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.

# Original description

# CodeT5+ 770M

## Model description

[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e., _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:

[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (\* indicates equal contribution).

Compared to the original CodeT5 family (CodeT5-base: `220M`, CodeT5-large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e., `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).

## How to use

This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as the original [CodeT5](https://github.com/salesforce/CodeT5).

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

checkpoint = "Salesforce/codet5p-770m"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():<extra_id_0>", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print "Hello World"
```

## Pretraining data

This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
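
The exact preprocessing code is not published here; purely as an illustration, a license filter of this kind could look like the sketch below (the `license` and `language` column names of the github-code dataset are assumptions):

```python
from datasets import load_dataset

# Permissive licenses, as listed above
PERMISSIVE = {"mit", "apache-2", "bsd-3-clause", "bsd-2-clause",
              "cc0-1.0", "unlicense", "isc"}

# Stream the dataset and keep only permissively licensed files
ds = load_dataset("codeparrot/github-code", split="train", streaming=True)
permissive_only = ds.filter(lambda example: example["license"] in PERMISSIVE)

for example in permissive_only.take(3):
    print(example["language"], example["license"])
```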

## Training procedure

This checkpoint is trained on the unimodal code data during first-stage pretraining, which covers a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
Please refer to the paper for more details.

## Evaluation results

CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
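
For context, pass@1 and pass@10 on HumanEval are conventionally computed with the unbiased estimator introduced alongside the benchmark; a short sketch (not taken from this model's evaluation code):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), where n samples were
    generated per problem and c of them passed all unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical counts: 200 samples per problem, 70 passing
print(pass_at_k(200, 70, 1), pass_at_k(200, 70, 10))
```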

## BibTeX entry and citation info

```bibtex
@article{wang2023codet5plus,
  title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
  author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
  journal={arXiv preprint},
  year={2023}
}
```
added_tokens.json ADDED
@@ -0,0 +1 @@
{}
config.json ADDED
@@ -0,0 +1,8 @@
{
  "add_source_bos": false,
  "add_source_eos": false,
  "bos_token": "<pad>",
  "decoder_start_token": "<pad>",
  "eos_token": "</s>",
  "unk_token": "<unk>"
}
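
These fields tell CTranslate2 which special tokens frame the source and target sequences. Loading the converted weights with the raw `ctranslate2` API (instead of the `hf-hub-ctranslate2` wrapper) would look roughly like this; the local directory path is an assumption, e.g. obtained via `huggingface_hub.snapshot_download`:

```python
import ctranslate2
import transformers

# assumes this repo has been downloaded to ./ct2fast-codet5p-770m
translator = ctranslate2.Translator("./ct2fast-codet5p-770m", device="cpu", compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained("Salesforce/codet5p-770m")

# CTranslate2 consumes and produces token strings, not ids
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("def print_hello_world():<extra_id_0>"))
results = translator.translate_batch([tokens])
output_tokens = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens)))
```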
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86ac369ecfb9ebd1bfbe18aeca7929539554af33242ca2f9960d4fd9706dc7fc
size 1475314606
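
As a rough sanity check, this file size is consistent with float16 storage of a model of this scale:

```python
# ~770M parameters at 2 bytes each (float16); the rounded "770M" is
# approximate, hence the small gap to the 1,475,314,606-byte model.bin
params = 770e6
print(f"{params * 2 / 1e9:.2f} GB")  # -> 1.54 GB
```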
shared_vocabulary.txt ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,147 @@
{
  "bos_token": {
    "content": "<s>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true
  },
  "eos_token": {
    "content": "</s>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true
  },
  "unk_token": {
    "content": "<unk>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true
  },
  "sep_token": {
    "content": "</s>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true
  },
  "pad_token": {
    "content": "<pad>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true
  },
  "cls_token": {
    "content": "<s>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true
  },
  "mask_token": { "content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
  "additional_special_tokens": [
    { "content": "<extra_id_99>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_98>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_97>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_96>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_95>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_94>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_93>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_92>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_91>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_90>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_89>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_88>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_87>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_86>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_85>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_84>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_83>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_82>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_81>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_80>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_79>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_78>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_77>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_76>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_75>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_74>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_73>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_72>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_71>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_70>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_69>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_68>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_67>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_66>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_65>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_64>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_63>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_62>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_61>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_60>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_59>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_58>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_57>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_56>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_55>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_54>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_53>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_52>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_51>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_50>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_49>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_48>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_47>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_46>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_45>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_44>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_43>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_42>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_41>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_40>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_39>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_38>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_37>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_36>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_35>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_34>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_33>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_32>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_31>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_30>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_29>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_28>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_27>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_26>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_25>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_24>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_23>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_22>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_21>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_20>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_19>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_18>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_17>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_16>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_15>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_14>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_13>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_12>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_11>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_10>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_9>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_8>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_7>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_6>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_5>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_4>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_3>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_2>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_1>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
    { "content": "<extra_id_0>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true }
  ]
}
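
The hundred `<extra_id_*>` sentinel entries above all share one template; for reference, an equivalent list can be generated programmatically (illustration only, not part of this repo):

```python
import json

TEMPLATE = {"single_word": False, "lstrip": True, "rstrip": False, "normalized": True}
extra_ids = [{"content": f"<extra_id_{i}>", **TEMPLATE} for i in range(99, -1, -1)]
print(json.dumps(extra_ids[:2], indent=2))  # first two of the 100 entries
```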
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
{
  "errors": "replace",
  "unk_token": {
    "content": "<unk>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true,
    "__type": "AddedToken"
  },
  "bos_token": {
    "content": "<s>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true,
    "__type": "AddedToken"
  },
  "eos_token": {
    "content": "</s>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true,
    "__type": "AddedToken"
  },
  "add_prefix_space": false,
  "sep_token": {
    "content": "</s>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true,
    "__type": "AddedToken"
  },
  "cls_token": {
    "content": "<s>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true,
    "__type": "AddedToken"
  },
  "pad_token": {
    "content": "<pad>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": true,
    "__type": "AddedToken"
  },
  "mask_token": {
    "content": "<mask>",
    "single_word": false,
    "lstrip": true,
    "rstrip": false,
    "normalized": true,
    "__type": "AddedToken"
  },
  "model_max_length": 512,
  "tokenizer_class": "RobertaTokenizer"
}
vocab.json ADDED
The diff for this file is too large to render. See raw diff