---
license: cc-by-4.0
library_name: saelens
tags:
  - arxiv:2408.05147
---

# Gemma Scope:

This is a landing page for Gemma Scope, a comprehensive, open suite of sparse autoencoders for Gemma 2 9B and 2B. Sparse Autoencoders are a "microscope" of sorts that can help us break down a model’s internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.

There are no model weights in this repo. If you are looking for them, please visit one of our model-specific weight repositories on the Hub.

This tutorial has instructions on how to load the SAEs.
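
As a minimal sketch (not a substitute for the tutorial), the snippet below loads one of these SAEs with the SAELens library. The `release` and `sae_id` strings are assumptions that mirror the repository path shown at the bottom of this page; check SAELens' pretrained-SAE directory for the exact identifiers available in your version.

```python
from sae_lens import SAE

# Assumed release/sae_id names, mirroring google/gemma-scope-2b-pt-res,
# layer_20/width_16k/average_l0_71; verify against SAELens' pretrained SAE listing.
result = SAE.from_pretrained(
    release="gemma-scope-2b-pt-res",
    sae_id="layer_20/width_16k/average_l0_71",
    device="cpu",
)

# Older SAELens versions return (sae, cfg_dict, sparsity); newer ones may return the SAE directly.
sae = result[0] if isinstance(result, tuple) else result
print(sae.cfg.d_in, sae.cfg.d_sae)  # input activation width and number of SAE features
```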

## Key links:

## Full weight set:

The full list of SAEs we trained, and the sites and layers at which we trained them, is summarized in the following table, adapted from Figure 1 of our technical report:

| Gemma 2 Model | SAE Width | Attention | MLP | Residual | Tokens |
|---|---|---|---|---|---|
| 2.6B PT (26 layers) | 2^14 ≈ 16.4K | All | All | All | 4B |
| | 2^15 | | | {12} | 8B |
| | 2^16 | All | All | All | 8B |
| | 2^17 | | | {12} | 8B |
| | 2^18 | | | {12} | 8B |
| | 2^19 | | | {12} | 8B |
| | 2^20 ≈ 1M | | | {5, 12, 19} | 16B |
| 9B PT (42 layers) | 2^14 | All | All | All | 4B |
| | 2^15 | | | {20} | 8B |
| | 2^16 | | | {20} | 8B |
| | 2^17 | All | All | All | 8B |
| | 2^18 | | | {20} | 8B |
| | 2^19 | | | {20} | 8B |
| | 2^20 | | | {9, 20, 31} | 16B |
| 27B PT (46 layers) | 2^17 | | | {10, 22, 34} | 8B |
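
Each row of the table corresponds to folders inside one of the weight repositories on the Hub. As a hedged example (the repo id is taken from the demo link below), you can enumerate the released layer/width/L0 folders with `huggingface_hub`:

```python
from huggingface_hub import list_repo_files

# List files in the 2B residual-stream repo to see which layers, widths,
# and average-L0 values were released.
files = list_repo_files("google/gemma-scope-2b-pt-res")
folders = sorted({"/".join(f.split("/")[:3]) for f in files if f.startswith("layer_")})
for folder in folders[:10]:
    print(folder)  # e.g. layer_20/width_16k/average_l0_71
```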

## Which SAE is in the Neuronpedia demo?

https://huggingface.co/google/gemma-scope-2b-pt-res/tree/main/layer_20/width_16k/average_l0_71
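
To fetch just that SAE's parameters without any SAE library, a sketch using `huggingface_hub` directly is below; the `params.npz` filename is an assumption about how the weights are stored, so confirm it in the repo's file browser first.

```python
import numpy as np
from huggingface_hub import hf_hub_download

# Download the parameters of the SAE used in the Neuronpedia demo.
# The params.npz filename is assumed; check the repository file listing.
path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-res",
    filename="layer_20/width_16k/average_l0_71/params.npz",
)
params = np.load(path)
print({name: params[name].shape for name in params.files})
```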