Update README.md
README.md CHANGED
@@ -11,17 +11,20 @@ These SAEs were trained with [SAE Lens](https://github.com/jbloomAus/SAELens) an
 
 All training hyperparameters are specified in cfg.json.
 
-They are loadable using SAE via a few methods.
+They are loadable using SAE via a few methods. The preferred method is to use the following:
 
 ```python
 import torch
-from
+from transformer_lens import HookedTransformer
+from sae_lens import SparseAutoencoder, ActivationsStore
 
 torch.set_grad_enabled(False)
-
-
-
+model = HookedTransformer.from_pretrained("gemma-2b")
+sparse_autoencoder = SparseAutoencoder.from_pretrained(
+    "gemma-2b-res-jb", # to see the list of available releases, go to: https://github.com/jbloomAus/SAELens/blob/main/sae_lens/pretrained_saes.yaml
+    "blocks.0.hook_resid_post" # change this to another specific SAE ID in the release if desired.
 )
+activation_store = ActivationsStore.from_config(model, sparse_autoencoder.cfg)
 ```
 
 ## Resid Post 0
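After this change, the snippet leaves the reader with a `model`, a `sparse_autoencoder`, and an `activation_store`. As a rough illustration (not part of the README diff), a minimal sketch of how these objects are typically wired together follows; names such as `get_batch_tokens`, `cfg.hook_point`, and the structure of the SAE's forward output are assumptions about this SAE Lens version and may differ in other releases.

```python
# Minimal sketch, assuming the SAE Lens API of the version used above:
# pull a batch of tokens from the activation store, cache the model's
# activations, and run the hooked activations through the SAE.
batch_tokens = activation_store.get_batch_tokens()   # assumed helper on ActivationsStore
_, cache = model.run_with_cache(batch_tokens)        # TransformerLens forward pass with caching
acts = cache[sparse_autoencoder.cfg.hook_point]      # e.g. "blocks.0.hook_resid_post"; field name may vary by version
sae_output = sparse_autoencoder(acts)                # reconstruction (plus feature activations/losses in some versions)
```

Comparing the SAE reconstruction against `acts` is a common first sanity check for a freshly loaded SAE.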