Update README.md
README.md
@@ -8,16 +8,16 @@ metrics:
 - glue
 ---
 
-#
+# UltraFastBERT-1x11-long
 
 This is the final model described in "Exponentially Faster Language Modelling".
 The model has been pretrained just like crammedBERT but with fast feedforward networks (FFF) in place of the traditional feedforward layers.
-To use this model, you need the code from the repo at https://github.com/pbelcak/
+To use this model, you need the code from the repo at https://github.com/pbelcak/UltraFastBERT.
 
 You can find the paper here: https://arxiv.org/abs/2311.10770, and the abstract below:
 
 > Language models only really need to use an exponential fraction of their neurons for individual inferences.
-> As proof, we present
+> As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs).
 > While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference. We publish our training code, benchmarking setup, and model weights.

@@ -37,8 +37,8 @@ This is the raw pretraining checkpoint. You can use this to fine-tune on a downs
 import cramming
 from transformers import AutoModelForMaskedLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("pbelcak/
-model = AutoModelForMaskedLM.from_pretrained("pbelcak/
+tokenizer = AutoTokenizer.from_pretrained("pbelcak/UltraFastBERT-1x11-long")
+model = AutoModelForMaskedLM.from_pretrained("pbelcak/UltraFastBERT-1x11-long")
 
 text = "Replace me by any text you'd like."
 encoded_input = tokenizer(text, return_tensors='pt')
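For intuition about the fast feedforward (FFF) layers mentioned in the abstract, here is a toy, self-contained sketch of the conditional-execution idea: a binary tree of neurons in which each input evaluates only the neurons on a single root-to-leaf path (12 of 4095 for a depth-12 tree, matching the numbers quoted above). The class name `ToyFFF`, the sign-based routing rule, and the per-node output rule are illustrative assumptions, not the implementation in the UltraFastBERT repository.

```python
# Illustrative sketch only: a toy "fast feedforward" layer showing conditional
# execution over a binary tree of neurons. This is NOT the UltraFastBERT code
# from https://github.com/pbelcak/UltraFastBERT; routing and output rules here
# are simplified assumptions for illustration.
import torch

class ToyFFF(torch.nn.Module):
    def __init__(self, dim: int, depth: int):
        super().__init__()
        self.depth = depth                     # neurons actually used per inference
        n_nodes = 2 ** depth - 1               # total neurons in the tree (4095 for depth 12)
        self.w_in = torch.nn.Parameter(torch.randn(n_nodes, dim) / dim ** 0.5)
        self.w_out = torch.nn.Parameter(torch.randn(n_nodes, dim) / dim ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (dim,) -- a single token vector for clarity.
        y = torch.zeros_like(x)
        node = 0
        for _ in range(self.depth):
            pre = self.w_in[node] @ x                                   # this node's pre-activation
            y = y + torch.nn.functional.gelu(pre) * self.w_out[node]    # this node's contribution
            node = 2 * node + (1 if pre > 0 else 2)                     # descend right if positive, else left
        return y

layer = ToyFFF(dim=768, depth=12)     # 2**12 - 1 = 4095 neurons, 12 touched per token
print(layer(torch.randn(768)).shape)  # torch.Size([768])
```

Evaluating a single path is what yields the exponential reduction claimed in the abstract: the layer holds 2^depth - 1 neurons but touches only `depth` of them per token.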
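The usage hunk above is cut off after tokenization. For completeness, a minimal end-to-end sketch follows; the final `output = model(**encoded_input)` line is the standard `transformers` masked-LM pattern and is assumed here rather than taken from the diff, and `import cramming` requires the code from https://github.com/pbelcak/UltraFastBERT to be on the Python path.

```python
# Minimal usage sketch; assumes the `cramming` package from the UltraFastBERT
# repository is importable (presumably what lets the Auto* classes resolve this
# custom architecture). The last line is the standard transformers pattern and
# is an assumed continuation of the truncated snippet in the diff.
import cramming  # noqa: F401
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pbelcak/UltraFastBERT-1x11-long")
model = AutoModelForMaskedLM.from_pretrained("pbelcak/UltraFastBERT-1x11-long")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # run the masked-LM forward pass on the encoded text
```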