---
language: en
---
# 85% Sparse BERT-Large (uncased) Prune OFA
This model is a result of our paper [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), presented at the ENLSP NeurIPS Workshop 2021.
For further details on the model and its results, see our paper and our implementation, available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
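
As a minimal usage sketch, the checkpoint can be loaded with the Hugging Face `transformers` library like any other BERT masked-LM checkpoint; the repository ID below is an assumption modeled on other Prune OFA checkpoints, not taken from this card, so substitute the actual Hub ID of this model. The sparsity check simply counts exact zeros in the 2-D weight matrices.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumed repository ID -- replace with this model's actual Hub ID.
model_id = "Intel/bert-large-uncased-sparse-85-unstructured-pruneofa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Optional: verify the weight sparsity by counting exact zeros
# across the model's 2-D weight matrices.
total, zeros = 0, 0
for name, param in model.named_parameters():
    if "weight" in name and param.dim() == 2:
        total += param.numel()
        zeros += (param == 0).sum().item()
print(f"Weight sparsity: {zeros / total:.2%}")
```

Note that this is a sparse *pre-trained* language model intended for fine-tuning on downstream tasks while keeping the pruned weights at zero, as described in the paper.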