Push model using huggingface_hub

Files changed:
- README.md (+6 -2)
- model.safetensors (+1 -1)
README.md CHANGED
@@ -6,16 +6,20 @@ tags:
 - arxiv:2409.04185
 - model_hub_mixin
 - pytorch_model_hub_mixin
-base_model: openai-community/gpt2
 ---
 
-# Model Card for
+# Model Card for tim-lawson/mlsae-gpt2-x64-k32-tfm
 
 A Multi-Layer Sparse Autoencoder (MLSAE) trained on the residual stream activation
 vectors from [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) with an
 expansion factor of R = 64 and sparsity k = 32, over 1 billion
 tokens from [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted).
 
+
+This model is a PyTorch Lightning MLSAETransformer module, which includes the underlying
+transformer.
+
+
 ### Model Sources
 
 - **Repository:** <https://github.com/tim-lawson/mlsae>
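As a back-of-envelope check of the hyperparameters in the model card above: assuming GPT-2's residual-stream width of 768 (a property of GPT-2, not stated in this diff) and reading "sparsity k = 32" as 32 active latents per token (the usual top-k interpretation, also an assumption), the implied latent dimensions can be sketched as:

```python
# Dimensions implied by the model card, under the assumptions above.
d_model = 768            # GPT-2 residual-stream width (assumed, from GPT-2's config)
R = 64                   # expansion factor, from the model card
k = 32                   # active latents per token, from the model card

n_latents = d_model * R  # width of the autoencoder's latent layer
sparsity = k / n_latents # fraction of latents active per token

print(n_latents)         # 49152
print(f"{sparsity:.5f}")
```

So each of GPT-2's 768-dimensional residual-stream vectors is expanded into 49,152 latents, of which only 32 are nonzero per token.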
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b1248a88fb1317b168c0e8464b994f872be0c54215052b08439c902ba338f2dc
 size 799770304