---
language: en
license: apache-2.0
tags:
- fill-mask
datasets:
- wikipedia
- bookcorpus
---
|
# 90% Sparse BERT-Large (uncased) Prune OFA |
|
This model is a result of our paper [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), presented at the ENLSP NeurIPS Workshop 2021.
|
|
|
For further details on the model and its results, see our paper and our implementation, available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
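
Since the model is tagged for fill-mask, here is a minimal usage sketch with the 🤗 Transformers library. The model ID `Intel/bert-large-uncased-sparse-90-unstructured-pruneofa` is an assumption; substitute the actual Hub ID of this checkpoint if it differs.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Assumed Hub ID for this checkpoint; adjust if the actual ID differs.
model_id = "Intel/bert-large-uncased-sparse-90-unstructured-pruneofa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Predict the masked token with a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 4))
```

The pruned weights are stored as zeros in a standard dense checkpoint, so the model loads and runs with stock Transformers; no special sparse kernels are required for inference.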