---
license: mit
---
|
This directory contains sparse autoencoders trained on activations at various points within gpt2-small using [Neel Nanda's open source code](https://github.com/neelnanda-io/1L-Sparse-Autoencoder). |
|
Each autoencoder was trained on 1B tokens from OpenWebText. |
|
A demo colab notebook is [here](https://colab.research.google.com/drive/1KeRGixXf_5GrG_7vQalG6UJQyhQd6byi?usp=sharing). |
|
The autoencoders are named `gpt2-small_{feature_dict_size}_{point}_{layer}.pt`, where:
|
- `feature_dict_size` is the number of hidden neurons (features) in the autoencoder
|
- `point` is either `mlp_out` or `resid_pre`
|
- `layer` is an integer from 0 to 11.
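As a minimal sketch of how such an autoencoder applies to activations, the forward pass below follows the encode/decode structure used in Neel Nanda's training code (ReLU encoder with a decoder-bias subtraction before encoding). The dimensions, parameter names, and checkpoint keys here are illustrative assumptions, not verified against these specific files; random weights stand in for a loaded checkpoint.

```python
import torch
import torch.nn.functional as F

# Assumed sizes for illustration: gpt2-small activations are 768-dimensional;
# feature_dict_size is whatever appears in the filename.
d_model = 768
feature_dict_size = 24576  # example value, taken from a hypothetical filename

# Example filename for the resid_pre autoencoder at layer 6:
fname = f"gpt2-small_{feature_dict_size}_resid_pre_6.pt"
# In practice you would load the checkpoint, e.g.: state = torch.load(fname)

# Random parameters standing in for the checkpoint contents (names are assumptions)
W_enc = torch.randn(d_model, feature_dict_size) * 0.01
b_enc = torch.zeros(feature_dict_size)
W_dec = torch.randn(feature_dict_size, d_model) * 0.01
b_dec = torch.zeros(d_model)

def sae_forward(x):
    """Encode activations into sparse features, then reconstruct them."""
    feats = F.relu((x - b_dec) @ W_enc + b_enc)  # sparse feature activations
    recon = feats @ W_dec + b_dec                # reconstructed activations
    return feats, recon

acts = torch.randn(4, d_model)  # a batch of model activations
feats, recon = sae_forward(acts)
print(feats.shape)  # torch.Size([4, 24576])
print(recon.shape)  # torch.Size([4, 768])
```

The reconstruction has the same shape as the input activations, so it can be substituted back into the model to measure how much of the behaviour the sparse features capture.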