ArthurConmyGDM committed
Commit be232c6 (1 parent: a93866d): Update README.md

README.md CHANGED
@@ -1,30 +1,20 @@
 ---
-license:
+license: cc-by-4.0
 ---
 
-# 1.
+# 1. Gemma Scope
 
-
+Gemma Scope is a comprehensive, open suite of sparse autoencoders for Gemma 2 9B and 2B. Sparse autoencoders are a "microscope" of sorts that can help us break down a model's internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.
 
-
+See our [landing page](https://huggingface.co/google/gemma-scope) for details on the whole suite. This is a specific set of SAEs:
 
-
-- `9b-pt-`: These SAEs were trained on the Gemma v2 9B base model (TODO link).
-- `att`: These SAEs were trained on the attention layer outputs, before the final linear projection (TODO link ckkissane post).
+# 2. What Is `gemma-scope-9b-pt-att`?
 
-
+- `gemma-scope-`: See 1.
+- `9b-pt-`: These SAEs were trained on the Gemma v2 9B base model.
+- `att`: These SAEs were trained on the model's attention layer outputs, before the final linear projection.
 
-
-
-A1: See https://docs.google.com/document/d/1bKaOw2mJPJDYhgFQGGVOyBB3M4Bm_Q3PMrfQeqeYi0M (Google internal only).
-
-Q2: What does "SAE" mean?
-
-A2: Sparse Autoencoder. See https://docs.google.com/document/d/1roMgCPMPEQgaNbCu15CGo966xRLToulCBQUVKVGvcfM (should be available to trusted HuggingFace collaborators, and Google too).
-
-TODO(conmy): remove this when making the main repo.
-
-## 4. Point of Contact
+## 3. Point of Contact
 
 Point of contact: Arthur Conmy
 
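For readers who cannot open the explainer docs the old README linked, here is a minimal PyTorch sketch of what a sparse autoencoder does, in the sense of the updated README's "microscope" paragraph. Everything in it is an illustrative assumption (toy dimensions, a plain ReLU encoder, random weights), not the architecture, dimensions, or loading code of the SAEs in this repo:

```python
import torch
import torch.nn as nn


class ToySAE(nn.Module):
    """Illustrative sparse autoencoder; NOT this repo's actual architecture.

    Encodes a model activation into a much wider, mostly-zero feature
    vector, then reconstructs the activation from the active features.
    """

    def __init__(self, d_model: int = 1024, d_sae: int = 8192):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def encode(self, acts: torch.Tensor) -> torch.Tensor:
        # The ReLU zeroes out non-firing features, giving a sparse code.
        return torch.relu(acts @ self.W_enc + self.b_enc)

    def decode(self, feats: torch.Tensor) -> torch.Tensor:
        return feats @ self.W_dec + self.b_dec

    def forward(self, acts: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(acts))


# Dummy activations stand in for the site these SAEs target:
# attention layer outputs, before the final linear projection.
sae = ToySAE()
acts = torch.randn(8, 1024)   # [n_tokens, d_model]
feats = sae.encode(acts)      # sparse feature activations, [n_tokens, d_sae]
recon = sae(acts)             # approximate reconstruction of `acts`
```

With trained weights, individual columns of `W_dec` correspond to candidate interpretable directions ("features"), which is what makes the microscope analogy work.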