jbloom committed
Commit f984d27 · verified · 1 Parent(s): 50c3988

Update README.md

Files changed (1)
  1. README.md +36 -3
README.md CHANGED
@@ -1,3 +1,36 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ ---
+
+ OpenAI's GPT2-Small SAEs, reformatted for easy loading with SAE Lens.
+
+ Links
+ - [Paper](https://cdn.openai.com/papers/sparse-autoencoders.pdf)
+ - [Original File Loading](https://github.com/openai/sparse_autoencoder/blob/lg-training/sparse_autoencoder/paths.py)
+
+ ```python
+ import torch
+ from transformer_lens import HookedTransformer
+ from sae_lens import SAE, ActivationsStore
+
+ torch.set_grad_enabled(False)
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+
+ model = HookedTransformer.from_pretrained("gpt2-small", device=device)
+ sae, cfg, sparsity = SAE.from_pretrained(
+     "gpt2-small-resid-post-v5-32k",  # for the list of available releases, see: https://github.com/jbloomAus/SAELens/blob/main/sae_lens/pretrained_saes.yaml
+     "blocks.11.hook_resid_post",  # change this to another SAE ID in the release if desired.
+     device=device,
+ )
+
+ # For loading activations or tokens from the training dataset.
+ activation_store = ActivationsStore.from_sae(
+     model=model,
+     sae=sae,
+     streaming=True,
+     # fairly conservative parameters here, so the same settings can be
+     # used for larger models without running out of memory.
+     store_batch_size_prompts=8,
+     train_batch_size_tokens=4096,
+     n_batches_in_buffer=4,
+     device=device,
+ )
+ ```
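
As a quick sanity check after loading, you can encode a batch of activations and inspect feature sparsity. The following is a minimal sketch, assuming the standard SAE Lens `ActivationsStore.get_batch_tokens`, `SAE.encode`, and `SAE.decode` methods, and reusing the `model`, `sae`, and `activation_store` variables from the snippet above:

```python
# Pull a batch of tokens from the SAE's training distribution.
batch_tokens = activation_store.get_batch_tokens()

# Run the model, caching only the activations the SAE reads from.
_, cache = model.run_with_cache(
    batch_tokens,
    names_filter=sae.cfg.hook_name,
)

# Encode into the sparse feature basis, then reconstruct.
feature_acts = sae.encode(cache[sae.cfg.hook_name])
reconstruction = sae.decode(feature_acts)

# Mean number of active features per token (L0); for a sparse
# autoencoder this should be small relative to the dictionary size.
print("mean L0:", (feature_acts > 0).float().sum(-1).mean().item())

# Reconstruction error on this batch.
mse = (reconstruction - cache[sae.cfg.hook_name]).pow(2).mean()
print("MSE:", mse.item())
```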