🚀 Old-school RNNs can actually rival fancy transformers!
Researchers from Mila and Borealis AI have just shown that simplified versions of good old Recurrent Neural Networks (RNNs) can match the performance of today's transformers.
They took a fresh look at LSTMs (from 1997!) and GRUs (from 2014). They stripped these models down to their bare essentials, creating "minLSTM" and "minGRU". The key changes:
❶ Removed dependencies on previous hidden states in the gates
❷ Dropped the tanh that was there to restrict the output range and avoid vanishing gradients
❸ Ensured outputs are time-independent in scale (not sure I understood that well either, don't worry)
➡️ As a result, these new, minimal RNNs can be trained with a "parallel scan" algorithm, processing the whole sequence in parallel: this takes 88% more memory but makes them 200x faster than their traditional counterparts on long sequences (see the sketch below)
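To make changes ❶, ❷ and the parallel-scan point concrete, here is a rough PyTorch sketch of a minGRU-style layer as I understand the paper: the gate z_t and the candidate state h̃_t are computed from x_t alone (no previous hidden state, no tanh), so the recurrence h_t = (1 - z_t) * h_{t-1} + z_t * h̃_t is linear in h_{t-1} and all timesteps can be computed at once. The naive cumprod/cumsum closed form below is only there to show why a parallel scan applies; the authors use a numerically safer log-space scan, and all class and variable names here are my own.

```python
import torch
import torch.nn as nn

class MinGRU(nn.Module):
    """Rough sketch of a minGRU-style layer (names and details are mine)."""

    def __init__(self, dim_in: int, dim_hidden: int):
        super().__init__()
        self.to_gate = nn.Linear(dim_in, dim_hidden)       # gate depends on x_t only (change 1)
        self.to_candidate = nn.Linear(dim_in, dim_hidden)  # candidate state, no tanh (change 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim_in)
        z = torch.sigmoid(self.to_gate(x))   # z_t = sigmoid(W_z x_t)
        h_tilde = self.to_candidate(x)       # h~_t = W_h x_t
        # Recurrence: h_t = (1 - z_t) * h_{t-1} + z_t * h~_t
        # It is linear in h_{t-1}, so every h_t can be computed in parallel.
        a = 1.0 - z                          # per-step decay coefficients
        b = z * h_tilde                      # per-step input contributions
        a_star = torch.cumprod(a, dim=1)     # running products of the decays
        # Closed form with h_0 = 0 (naive version; the paper instead uses a
        # numerically safer log-space parallel scan):
        h = a_star * torch.cumsum(b / a_star.clamp(min=1e-12), dim=1)
        return h

# Example: process a batch of 4 sequences of length 128 in one parallel pass.
layer = MinGRU(dim_in=32, dim_hidden=64)
out = layer(torch.randn(4, 128, 32))   # -> (4, 128, 64)
```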
💥 The results are mind-blowing! Performance-wise, these minimal RNNs go toe-to-toe with Transformers and Mamba.
And for language modeling, they need 2.5x fewer training steps than Transformers to reach the same performance! 🚀
🤔 Why does this matter?
By showing that much simpler models can match transformers, this challenges the narrative that ever more advanced architectures are what we need for better performance!
💬 François Chollet wrote in a tweet about this paper:
"The fact that there are many recent architectures coming from different directions that roughly match Transformers is proof that architectures aren't fundamentally important in the curve-fitting paradigm (aka deep learning)"
"Curve-fitting is about embedding a dataset on a curve. The critical factor is the dataset, not the specific hard-coded bells and whistles that constrain the curve's shape."
It's Rich Sutton's Bitter Lesson striking again: you don't need fancy, clever architectures, just scale up your model and data!
Read the paper 👉 Were RNNs All We Needed? (2410.01201)