ekurtic committed on
Commit 3c851ce
1 Parent(s): d632783

Model release

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,39 @@
+ # oBERT-12-downstream-pruned-unstructured-80-mnli
+
+ This model was obtained with the pruning method introduced in [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+ It corresponds to the model presented in `Table 1 - 30 Epochs - oBERT - MNLI 80%` of the paper.
+
+ ```
+ Pruning method: oBERT downstream unstructured
+ Paper: https://arxiv.org/abs/2203.07259
+ Dataset: MNLI
+ Sparsity: 80%
+ Number of layers: 12
+ ```
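+
+ A minimal usage sketch, assuming the checkpoint loads through the standard `transformers` sequence-classification API; the `model_id` below is a placeholder for wherever this repository is hosted:
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_id = "oBERT-12-downstream-pruned-unstructured-80-mnli"  # placeholder repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ model.eval()
+
+ # MNLI classifies a premise/hypothesis pair as entailment, neutral, or contradiction.
+ inputs = tokenizer(
+     "A soccer game with multiple males playing.",
+     "Some men are playing a sport.",
+     return_tensors="pt",
+ )
+ with torch.no_grad():
+     logits = model(**inputs).logits
+ # id2label resolves the checkpoint's own label ordering.
+ print(model.config.id2label[logits.argmax(dim=-1).item()])
+ ```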
+
+ The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
+
+ ```
+ | oBERT 80%     | m-acc | mm-acc |
+ | ------------- | ----- | ------ |
+ | seed=42       | 84.30 | 84.98  |
+ | seed=3407 (*) | 84.46 | 84.99  |
+ | seed=54321    | 84.18 | 84.76  |
+ | ------------- | ----- | ------ |
+ | mean          | 84.32 | 84.91  |
+ | stdev         | 0.140 | 0.133  |
+ ```
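+
+ As a sanity check on the `Sparsity: 80%` figure above, the fraction of exactly-zero weights can be measured directly. A sketch under the assumption that pruning targets the encoder's 2-D weight matrices (embeddings, biases, and LayerNorm parameters are treated as dense here):
+
+ ```python
+ from transformers import AutoModelForSequenceClassification
+
+ # Placeholder repo id, as above.
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "oBERT-12-downstream-pruned-unstructured-80-mnli"
+ )
+
+ total = zeros = 0
+ for name, param in model.named_parameters():
+     if "encoder" in name and param.dim() == 2:  # encoder weight matrices only
+         total += param.numel()
+         zeros += (param == 0).sum().item()
+ print(f"encoder weight sparsity: {zeros / total:.2%}")  # expect ~80%
+ ```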
+
+ Code: _coming soon_
+
+ ## BibTeX entry and citation info
+ ```bibtex
+ @article{kurtic2022optimal,
+   title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+   author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+   journal={arXiv preprint arXiv:2203.07259},
+   year={2022}
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02556e2d789824e162462c0fef2a397fbb3ad0f32d09e4efba81bb32a28153fc
+ size 806
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ecc5f4bda1b1ec69e586ea9fcc4b442fbd5338d0a924953bb692b1a0c64deb2
+ size 825
eval_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd92b5dda8890b4e4ec34ea2314ea584d331075b41209edb163447e0e032f5f5
+ size 354
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3e991d42c0544a922bf76124f1af77abadea33e2e7be72c16c3de3addc4143c
+ size 438027529
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+ size 112
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a863c20bb9664ba983f10e20d34c790e0eea92f165fc4716c4bad62f6bdc70b4
+ size 285
train_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7654554febfc43051645e62606237b5511332e0c81343f0e39f7b288807b6a5f
+ size 473
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae564f087d50b4e588555c2058a11271bb0dee7ba3460aec106a83e8a4914908
+ size 97735
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ffcec7c1c079038abaaffaf4763786c7cce67c45107db0badcf32ffda96df188
+ size 2415
vocab.txt ADDED
The diff for this file is too large to render.