anthonymeo committed
Commit 3f04944
1 Parent(s): a815a94

Delete model-files

model-files/1_Pooling/config.json DELETED
@@ -1,10 +0,0 @@
- {
-   "word_embedding_dimension": 768,
-   "pooling_mode_cls_token": true,
-   "pooling_mode_mean_tokens": false,
-   "pooling_mode_max_tokens": false,
-   "pooling_mode_mean_sqrt_len_tokens": false,
-   "pooling_mode_weightedmean_tokens": false,
-   "pooling_mode_lasttoken": false,
-   "include_prompt": true
- }
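This deleted pooling config selects pure CLS pooling: only `pooling_mode_cls_token` is true, so the sentence embedding is the transformer's hidden state at the first ([CLS]) position. A minimal illustration of what that means, using a dummy tensor rather than this model's actual forward pass:

```python
import torch

# (batch, seq_len, hidden) token embeddings; dummy values for illustration
token_embeddings = torch.randn(2, 12, 768)

# CLS pooling: take the vector at position 0 for each sequence
sentence_embeddings = token_embeddings[:, 0]
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```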
 
model-files/README.md DELETED
@@ -1,353 +0,0 @@
- ---
- base_model: sentence-transformers/msmarco-distilbert-base-tas-b
- datasets: []
- language: []
- library_name: sentence-transformers
- pipeline_tag: sentence-similarity
- tags:
- - sentence-transformers
- - sentence-similarity
- - feature-extraction
- - generated_from_trainer
- - dataset_size:6192
- - loss:MultipleNegativesRankingLoss
- widget:
- - source_sentence: how to calculate a service load
-   sentences:
-   - what is the height of a lead in antenna
-   - types se cable
-   - what is the purpose of a circuit breaker
- - source_sentence: minimum ampacity for ungrounded conductors
-   sentences:
-   - types of mv cables
-   - can optical fiber cables be installed in raceway
-   - what is a motor and motor operated equipment
- - source_sentence: what is the code for a circuit breaker
-   sentences:
-   - what color insulation is required to be grounded
-   - what is a suitable marker for antflix
-   - what conductors are permitted to originate in auxiliary gutter
- - source_sentence: what is plfa cable
-   sentences:
-   - what is a noncombustible surface
-   - how much liquid can be enclosed in a capacitor
-   - what is flammable gas in a busway
- - source_sentence: how many volts to ground a transformer
-   sentences:
-   - what is a grounded conductor
-   - how long is a plenum cable
-   - what is the operating voltage of a transformer
- ---
- 
- # SentenceTransformer based on sentence-transformers/msmarco-distilbert-base-tas-b
- 
- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/msmarco-distilbert-base-tas-b](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
- 
- ## Model Details
- 
- ### Model Description
- - **Model Type:** Sentence Transformer
- - **Base model:** [sentence-transformers/msmarco-distilbert-base-tas-b](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b) <!-- at revision 996dfc6404137c6d89c7bf647a4bae62fdf8dd9a -->
- - **Maximum Sequence Length:** 1024 tokens
- - **Output Dimensionality:** 768 dimensions
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
- 
- ### Model Sources
- 
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
- 
- ### Full Model Architecture
- 
- ```
- SentenceTransformer(
-   (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: DistilBertModel
-   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
- )
- ```
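The two-module stack printed above can also be assembled explicitly with the library's `models` API. A minimal sketch, assuming the base checkpoint is reachable on the Hub:

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: DistilBERT transformer with the extended 1024-token sequence length
transformer = models.Transformer(
    "sentence-transformers/msmarco-distilbert-base-tas-b",
    max_seq_length=1024,
)

# Module 1: CLS pooling over the 768-dimensional token embeddings
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 768
    pooling_mode="cls",
)

model = SentenceTransformer(modules=[transformer, pooling])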
- 
- ## Usage
- 
- ### Direct Usage (Sentence Transformers)
- 
- First install the Sentence Transformers library:
- 
- ```bash
- pip install -U sentence-transformers
- ```
- 
- Then you can load this model and run inference.
- ```python
- from sentence_transformers import SentenceTransformer
- 
- # Download from the 🤗 Hub
- model = SentenceTransformer("sentence_transformers_model_id")
- # Run inference
- sentences = [
-     'how many volts to ground a transformer',
-     'how long is a plenum cable',
-     'what is the operating voltage of a transformer',
- ]
- embeddings = model.encode(sentences)
- print(embeddings.shape)
- # [3, 768]
- 
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
- ```
- 
- <!--
- ### Direct Usage (Transformers)
- 
- <details><summary>Click to see the direct usage in Transformers</summary>
- 
- </details>
- -->
- 
- <!--
- ### Downstream Usage (Sentence Transformers)
- 
- You can finetune this model on your own dataset.
- 
- <details><summary>Click to expand</summary>
- 
- </details>
- -->
- 
- <!--
- ### Out-of-Scope Use
- 
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
- 
- <!--
- ## Bias, Risks and Limitations
- 
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
- 
- <!--
- ### Recommendations
- 
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->
- 
- ## Training Details
- 
- ### Training Dataset
- 
- #### Unnamed Dataset
- 
- * Size: 6,192 training samples
- * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
- * Approximate statistics based on the first 1000 samples:
-   |         | sentence_0 | sentence_1 | sentence_2 |
-   |:--------|:-----------|:-----------|:-----------|
-   | type    | string     | dict       | dict       |
-   | details | <ul><li>min: 5 tokens</li><li>mean: 9.71 tokens</li><li>max: 33 tokens</li></ul> | <ul><li></li></ul> | <ul><li></li></ul> |
- * Samples:
-   | sentence_0 | sentence_1 | sentence_2 |
-   |:-----------|:-----------|:-----------|
-   | <code>what is a metal water piping system</code> | <code>{'content': 'Metal water piping system(s) installed in or attached to a building or structure shall be bonded to any of the following: Service equipment enclosureGrounded conductor at the serviceGrounding electrode conductor, if of sufficient sizeOne or more grounding electrodes used, if the grounding electrode conductor or bonding jumper to the grounding electrode is of sufficient sizeThe bonding jumper(s) shall be installed in accordance with 250.64(A), (B), and (E). The points of attachment of the bonding jumper(s) s'}</code> | <code>{'content': 'Metal fences enclosing, and other metal structures in or surrounding, a substation with exposed electrical conductors and equipment shall be grounded and bonded to limit step, touch, and transfer voltages. [250.194](https://2023.antflix.net#250.194)'}</code> |
-   | <code>how many amperes should a circuit breaker be</code> | <code>{'content': '10 amperes, provided all the following conditions are met: Continuous loads do not exceed 8 amperesOvercurrent protection is provided by one of the following:Branch-circuit-rated circuit breakers are listed and marked for use with 14 AWG copper-clad aluminum conductor.Branch-circuit-rated fuses are listed and marked for use with 14 AWG copper-clad aluminum conductor. [240.4(D)(3)](https://2023.antflix.net#240.4(D)(3))'}</code> | <code>{'content': 'For installations to supply only limited loads of a single branch circuit, the branch circuit disconnecting means shall have a rating of not less than 15 amperes. [225.39(A)](https://2023.antflix.net#225.39(A))'}</code> |
-   | <code>phase converter installation</code> | <code>{'content': 'This article covers the installation and use of phase converters. [455.1](https://2023.antflix.net#455.1)'}</code> | <code>{'content': 'The 120-volt ac side of the voltage converter shall be wired in full conformity with the requirements of Parts I, II, and IV of this article for 120-volt electrical systems. Exception: Converters supplied as an integral part of a listed appliance shall not be subject to 551.20(B). All converters and transformers shall be listed for use in recreational vehicles and designed or equipped to provide overtemperature protection. To determine the converter rating, the following percentages shall be applied to the '}</code> |
- * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
-   ```json
-   {
-       "scale": 20.0,
-       "similarity_fct": "cos_sim"
-   }
-   ```
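The JSON block above records the contrastive loss configuration. A hedged sketch of how such a setup is typically wired together in sentence-transformers 3.x; the one-row dataset below is illustrative, not the actual 6,192-sample training set:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/msmarco-distilbert-base-tas-b")

# (anchor, positive, negative) triples, matching the sentence_0/1/2 columns above
train_dataset = Dataset.from_dict({
    "anchor": ["how many volts to ground a transformer"],
    "positive": ["what is the operating voltage of a transformer"],
    "negative": ["how long is a plenum cable"],
})

# scale=20.0 with the default cosine similarity, as in the JSON block above
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```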
- 
- ### Training Hyperparameters
- #### Non-Default Hyperparameters
- 
- - `per_device_train_batch_size`: 16
- - `per_device_eval_batch_size`: 16
- - `num_train_epochs`: 10
- - `multi_dataset_batch_sampler`: round_robin
- 
- #### All Hyperparameters
- <details><summary>Click to expand</summary>
- 
- - `overwrite_output_dir`: False
- - `do_predict`: False
- - `prediction_loss_only`: True
- - `per_device_train_batch_size`: 16
- - `per_device_eval_batch_size`: 16
- - `per_gpu_train_batch_size`: None
- - `per_gpu_eval_batch_size`: None
- - `gradient_accumulation_steps`: 1
- - `eval_accumulation_steps`: None
- - `learning_rate`: 5e-05
- - `weight_decay`: 0.0
- - `adam_beta1`: 0.9
- - `adam_beta2`: 0.999
- - `adam_epsilon`: 1e-08
- - `max_grad_norm`: 1
- - `num_train_epochs`: 10
- - `max_steps`: -1
- - `lr_scheduler_type`: linear
- - `lr_scheduler_kwargs`: {}
- - `warmup_ratio`: 0.0
- - `warmup_steps`: 0
- - `log_level`: passive
- - `log_level_replica`: warning
- - `log_on_each_node`: True
- - `logging_nan_inf_filter`: True
- - `save_safetensors`: True
- - `save_on_each_node`: False
- - `save_only_model`: False
- - `no_cuda`: False
- - `use_cpu`: False
- - `use_mps_device`: False
- - `seed`: 42
- - `data_seed`: None
- - `jit_mode_eval`: False
- - `use_ipex`: False
- - `bf16`: False
- - `fp16`: False
- - `fp16_opt_level`: O1
- - `half_precision_backend`: auto
- - `bf16_full_eval`: False
- - `fp16_full_eval`: False
- - `tf32`: None
- - `local_rank`: 0
- - `ddp_backend`: None
- - `tpu_num_cores`: None
- - `tpu_metrics_debug`: False
- - `debug`: []
- - `dataloader_drop_last`: False
- - `dataloader_num_workers`: 0
- - `dataloader_prefetch_factor`: None
- - `past_index`: -1
- - `disable_tqdm`: False
- - `remove_unused_columns`: True
- - `label_names`: None
- - `load_best_model_at_end`: False
- - `ignore_data_skip`: False
- - `fsdp`: []
- - `fsdp_min_num_params`: 0
- - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- - `fsdp_transformer_layer_cls_to_wrap`: None
- - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}
- - `deepspeed`: None
- - `label_smoothing_factor`: 0.0
- - `optim`: adamw_torch
- - `optim_args`: None
- - `adafactor`: False
- - `group_by_length`: False
- - `length_column_name`: length
- - `ddp_find_unused_parameters`: None
- - `ddp_bucket_cap_mb`: None
- - `ddp_broadcast_buffers`: False
- - `dataloader_pin_memory`: True
- - `dataloader_persistent_workers`: False
- - `skip_memory_metrics`: True
- - `use_legacy_prediction_loop`: False
- - `push_to_hub`: False
- - `resume_from_checkpoint`: None
- - `hub_model_id`: None
- - `hub_strategy`: every_save
- - `hub_private_repo`: False
- - `hub_always_push`: False
- - `gradient_checkpointing`: False
- - `gradient_checkpointing_kwargs`: None
- - `include_inputs_for_metrics`: False
- - `fp16_backend`: auto
- - `push_to_hub_model_id`: None
- - `push_to_hub_organization`: None
- - `mp_parameters`: 
- - `auto_find_batch_size`: False
- - `full_determinism`: False
- - `torchdynamo`: None
- - `ray_scope`: last
- - `ddp_timeout`: 1800
- - `torch_compile`: False
- - `torch_compile_backend`: None
- - `torch_compile_mode`: None
- - `dispatch_batches`: None
- - `split_batches`: None
- - `include_tokens_per_second`: False
- - `include_num_input_tokens_seen`: False
- - `neftune_noise_alpha`: None
- - `optim_target_modules`: None
- - `batch_sampler`: batch_sampler
- - `multi_dataset_batch_sampler`: round_robin
- 
- </details>
- 
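For reference, a hedged sketch of how the non-default values above would map onto the library's training arguments in sentence-transformers 3.x (the output path is illustrative):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/finetuned-tas-b",  # illustrative path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)
```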
- ### Training Logs
- | Epoch  | Step | Training Loss |
- |:------:|:----:|:-------------:|
- | 1.2920 | 500  | 0.2747        |
- | 2.5840 | 1000 | 0.0887        |
- | 3.8760 | 1500 | 0.0512        |
- | 5.1680 | 2000 | 0.0344        |
- | 6.4599 | 2500 | 0.0279        |
- | 7.7519 | 3000 | 0.0213        |
- | 9.0439 | 3500 | 0.02          |
- 
- 
- ### Framework Versions
- - Python: 3.10.13
- - Sentence Transformers: 3.0.1
- - Transformers: 4.39.3
- - PyTorch: 2.1.2
- - Accelerate: 0.32.1
- - Datasets: 2.20.0
- - Tokenizers: 0.15.2
- 
- ## Citation
- 
- ### BibTeX
- 
- #### Sentence Transformers
- ```bibtex
- @inproceedings{reimers-2019-sentence-bert,
-     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
-     author = "Reimers, Nils and Gurevych, Iryna",
-     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
-     month = "11",
-     year = "2019",
-     publisher = "Association for Computational Linguistics",
-     url = "https://arxiv.org/abs/1908.10084",
- }
- ```
- 
- #### MultipleNegativesRankingLoss
- ```bibtex
- @misc{henderson2017efficient,
-     title={Efficient Natural Language Response Suggestion for Smart Reply},
-     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
-     year={2017},
-     eprint={1705.00652},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL}
- }
- ```
- 
- <!--
- ## Glossary
- 
- *Clearly define terms in order to be accessible across audiences.*
- -->
- 
- <!--
- ## Model Card Authors
- 
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
- 
- <!--
- ## Model Card Contact
- 
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
 
model-files/config.json DELETED
@@ -1,24 +0,0 @@
- {
-   "_name_or_path": "sentence-transformers/msmarco-distilbert-base-tas-b",
-   "activation": "gelu",
-   "architectures": [
-     "DistilBertModel"
-   ],
-   "attention_dropout": 0.1,
-   "dim": 768,
-   "dropout": 0.1,
-   "hidden_dim": 3072,
-   "initializer_range": 0.02,
-   "max_position_embeddings": 512,
-   "model_type": "distilbert",
-   "n_heads": 12,
-   "n_layers": 6,
-   "pad_token_id": 0,
-   "qa_dropout": 0.1,
-   "seq_classif_dropout": 0.2,
-   "sinusoidal_pos_embds": false,
-   "tie_weights_": true,
-   "torch_dtype": "float32",
-   "transformers_version": "4.39.3",
-   "vocab_size": 30522
- }
 
model-files/config_sentence_transformers.json DELETED
@@ -1,10 +0,0 @@
- {
-   "__version__": {
-     "sentence_transformers": "3.0.1",
-     "transformers": "4.39.3",
-     "pytorch": "2.1.2"
-   },
-   "prompts": {},
-   "default_prompt_name": null,
-   "similarity_fn_name": null
- }
 
model-files/model.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:56ee031bd8b87ef0a998590ee2c3b0ffcad36deae7dc95b546a148ecf8e5ef77
- size 265462608
 
model-files/modules.json DELETED
@@ -1,14 +0,0 @@
- [
-   {
-     "idx": 0,
-     "name": "0",
-     "path": "",
-     "type": "sentence_transformers.models.Transformer"
-   },
-   {
-     "idx": 1,
-     "name": "1",
-     "path": "1_Pooling",
-     "type": "sentence_transformers.models.Pooling"
-   }
- ]
 
model-files/sentence_bert_config.json DELETED
@@ -1,4 +0,0 @@
- {
-   "max_seq_length": 1024,
-   "do_lower_case": false
- }
 
model-files/special_tokens_map.json DELETED
@@ -1,37 +0,0 @@
- {
-   "cls_token": {
-     "content": "[CLS]",
-     "lstrip": false,
-     "normalized": false,
-     "rstrip": false,
-     "single_word": false
-   },
-   "mask_token": {
-     "content": "[MASK]",
-     "lstrip": false,
-     "normalized": false,
-     "rstrip": false,
-     "single_word": false
-   },
-   "pad_token": {
-     "content": "[PAD]",
-     "lstrip": false,
-     "normalized": false,
-     "rstrip": false,
-     "single_word": false
-   },
-   "sep_token": {
-     "content": "[SEP]",
-     "lstrip": false,
-     "normalized": false,
-     "rstrip": false,
-     "single_word": false
-   },
-   "unk_token": {
-     "content": "[UNK]",
-     "lstrip": false,
-     "normalized": false,
-     "rstrip": false,
-     "single_word": false
-   }
- }
 
model-files/tokenizer.json DELETED
The diff for this file is too large to render. See raw diff
 
model-files/tokenizer_config.json DELETED
@@ -1,57 +0,0 @@
- {
-   "added_tokens_decoder": {
-     "0": {
-       "content": "[PAD]",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     },
-     "100": {
-       "content": "[UNK]",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     },
-     "101": {
-       "content": "[CLS]",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     },
-     "102": {
-       "content": "[SEP]",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     },
-     "103": {
-       "content": "[MASK]",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false,
-       "special": true
-     }
-   },
-   "clean_up_tokenization_spaces": true,
-   "cls_token": "[CLS]",
-   "do_basic_tokenize": true,
-   "do_lower_case": true,
-   "mask_token": "[MASK]",
-   "model_max_length": 512,
-   "never_split": null,
-   "pad_token": "[PAD]",
-   "sep_token": "[SEP]",
-   "strip_accents": null,
-   "tokenize_chinese_chars": true,
-   "tokenizer_class": "DistilBertTokenizer",
-   "unk_token": "[UNK]"
- }
 
model-files/vocab.txt DELETED
The diff for this file is too large to render. See raw diff