SAELens
ArthurConmyGDM committed
Commit d9f8eb2
1 Parent(s): e1d9dc0

Update README.md

Files changed (1)
1. README.md +8 -18
README.md CHANGED
@@ -2,29 +2,19 @@
  license: apache-2.0
  ---
 
- # 1. GemmaScope
+ # 1. Gemma Scope
 
- Gemmascope is TODO
+ Gemma Scope is a comprehensive, open suite of Sparse Autoencoders for Gemma 2 9B and 2B. Sparse Autoencoders are a "microscope" of sorts that can help us break down a model's internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.
 
- # 2. What Is `gemmascope-2b-pt-mlp`?
+ See our [landing page](https://huggingface.co/google/gemma-scope) for details on the whole suite. This is a specific set of SAEs:
 
- - `gemmascope-`: See 1.
- - `2b-pt-`: these SAEs were trained on the Gemma v2 2B base model (TODO link)
- - `mlp`: These SAEs were trained on MLP outputs
+ # 2. What Is `gemma-scope-2b-pt-mlp`?
 
- ## 3. GTM FAQ (TODO(conmy): delete for main rollout)
+ - `gemma-scope-`: See 1.
+ - `2b-pt-`: These SAEs were trained on the Gemma v2 2B base model.
+ - `mlp`: These SAEs were trained on the MLP sublayer outputs.
 
- Q1: Why does this model exist in `gg-hf`?
-
- A1: See https://docs.google.com/document/d/1bKaOw2mJPJDYhgFQGGVOyBB3M4Bm_Q3PMrfQeqeYi0M (Google internal only).
-
- Q2: What does "SAE" mean?
-
- A2: Sparse Autoencoder. See https://docs.google.com/document/d/1roMgCPMPEQgaNbCu15CGo966xRLToulCBQUVKVGvcfM (should be available to trusted HuggingFace collaborators, and Google too).
-
- TODO(conmy): remove this when making the main repo.
-
- ## 4. Point of Contact
+ ## 3. Point of Contact
 
  Point of contact: Arthur Conmy
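
For readers landing on this commit without SAE background, here is a minimal sketch of the decompose-and-reconstruct idea the new README describes in section 1. It is illustrative only: the `ToySAE` name and the sizes are made up for this example (2304 matches Gemma 2 2B's activation width), and the plain ReLU is a simplification of the JumpReLU activation the Gemma Scope SAEs actually use.

```python
import torch
import torch.nn as nn

class ToySAE(nn.Module):
    """Illustrative sparse autoencoder, not the released architecture.

    Encodes a d_model-dim activation into n_features sparse codes, then
    reconstructs the activation as a weighted sum of decoder directions.
    """

    def __init__(self, d_model: int = 2304, n_features: int = 16384):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, n_features) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(n_features))
        self.W_dec = nn.Parameter(torch.randn(n_features, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        # Encode: after the nonlinearity most codes are zero, so each
        # activation is explained by a small set of active features
        # (the "underlying concepts" the README refers to).
        f = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: reconstruct the activation from the sparse codes.
        x_hat = f @ self.W_dec + self.b_dec
        return f, x_hat

sae = ToySAE()
mlp_out = torch.randn(4, 2304)           # stand-in for real MLP outputs
features, reconstruction = sae(mlp_out)  # sparse codes + reconstruction
```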
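
Since the page is tagged with SAELens, a loading sketch may also help. `SAE.from_pretrained` is the standard SAELens entry point, but the exact `release` and `sae_id` strings below are assumptions; consult SAELens's pretrained-SAE registry for the identifiers it assigns to this repo.

```python
from sae_lens import SAE  # pip install sae-lens

# NOTE: release/sae_id are assumed names; check SAELens's pretrained
# SAE registry for the exact identifiers covering gemma-scope-2b-pt-mlp.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gemma-scope-2b-pt-mlp-canonical",
    sae_id="layer_20/width_16k/canonical",
    device="cpu",
)
print(cfg_dict)  # architecture and training metadata for the loaded SAE
```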