Upload abstract/2010.05315.txt with huggingface_hub
abstract/2010.05315.txt +1 -0
abstract/2010.05315.txt
ADDED
@@ -0,0 +1 @@
+"We propose a novel type of balanced clustering algorithm to approximate attention. Attention complexity is reduced from O(N^2) to O(N log N), where N is the sequence length. Our algorithm, SMYRF, uses Locality Sensitive Hashing (LSH) in a novel way by defining new Asymmetric transformations and an adaptive scheme that produces balanced clusters. The biggest advantage of SMYRF is that it can be used as a drop-in replacement for dense attention layers without any retraining. In contrast, prior fast attention methods impose constraints (e.g. queries and keys share the same vector representations) and require re-training from scratch. We apply our method to pre-trained state-of-the-art Natural Language Processing and Computer Vision models and we report significant memory and speed benefits. Notably, SMYRF-BERT slightly outperforms BERT on GLUE, while using 50% less memory. We also show that SMYRF can be used interchangeably with dense attention before and after training. Finally, we use SMYRF to train GANs with attention in high resolutions. Using a single TPU, we were able to scale attention to 128x128 = 16k and 256x256 = 65k tokens on BigGAN on CelebA-HQ."