add readme, pt model and extra configs

- README.md +97 -0
- pytorch_model.pt +3 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +15 -0

README.md CHANGED
@@ -1,3 +1,100 @@
 ---
 license: apache-2.0
+tags:
+- stripedhyena
+- long context
+- deep signal processing
+- hybrid
+- biology
+- genomics
 ---
+
+
+## Evo-1 (Transposon)
+
+<p align="center">
+  <img src="https://cdn-uploads.huggingface.co/production/uploads/62a1306bbe7fa896d2c8de44/JoEHcvLTUlHoMcgh3mmAz.png" width="70%" />
+</p>
+
+
+### News
+
+We identified and fixed an issue related to an incorrect permutation of some projections, which affected generation quality. To use the new model revision, please load the model as follows:
+
+```python
+from transformers import AutoConfig, AutoModelForCausalLM
+
+model_name = "togethercomputer/evo-1-8k-transposon"  # this repository's Hub ID (assumed)
+
+config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, revision="1.1_fix")
+model = AutoModelForCausalLM.from_pretrained(
+    model_name,
+    config=config,
+    trust_remote_code=True,
+    revision="1.1_fix",
+)
+```
+
+### About
+
+Evo is a biological foundation model capable of long-context modeling and design.
+
+Evo uses the [StripedHyena architecture](https://github.com/togethercomputer/stripedhyena) to model sequences at single-nucleotide, byte-level resolution with near-linear scaling of compute and memory relative to context length.
+Evo has 7 billion parameters and was trained on OpenGenome, a prokaryotic whole-genome dataset containing ~300 billion tokens.
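+
+To make "byte-level resolution" concrete, the sketch below shows the idea behind a byte tokenizer: each nucleotide character becomes exactly one token, with no merges or sub-word vocabulary. This illustrates the concept only; the actual `ByteTokenizer` ships with this repository's remote code.
+
+```python
+# Sketch: byte-level tokenization maps each character to its byte value.
+seq = "ACGTACGT"
+token_ids = list(seq.encode("utf-8"))  # [65, 67, 71, 84, 65, 67, 71, 84]
+assert len(token_ids) == len(seq)      # one token per nucleotide
+```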
+
+Technical details about Evo can be found in our preprint and accompanying blog posts. Evo was developed collaboratively by the [Arc Institute](https://arcinstitute.org/) and TogetherAI.
+
+As part of our commitment to open science, we release the **weights of 15 intermediate pretraining checkpoints** for phase 1 and phase 2 of pretraining. The checkpoints are available as branches of the corresponding HuggingFace repository.
+
+**Evo-1 (Transposon)** is our fine-tuned model used to generate IS200/IS605 transposable elements, trained at a context length of 8k tokens.
+
+| Checkpoint Name | Description |
+|----------------------------------------|-------------|
+| `evo-1-8k-base` | A model pretrained with 8,192 context. We use this model as the base model for molecular-scale finetuning tasks. |
+| `evo-1-131k-base` | A model pretrained with 131,072 context using `evo-1-8k-base` as the initialization. We use this model to reason about and generate sequences at the genome scale. |
+| `evo-1-8k-crispr` | A model fine-tuned from `evo-1-8k-base` specifically on CRISPR-Cas systems. We use this model to generate Cas9/12/13 systems. |
+| `evo-1-8k-transposon` | A model fine-tuned from `evo-1-8k-base` specifically on transposons. We use this model to generate IS200/IS605 elements. |
+
+### Model Architecture
+
+StripedHyena is a deep signal processing, hybrid architecture composed of multi-head attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, improving over decoder-only Transformers.
+
+StripedHyena is designed to leverage the specialization of each of its layer classes: Hyena layers implement the bulk of the computation required for sequence processing, while attention layers supplement the ability to perform targeted pattern recall.
+
+Some highlights of the architecture:
+- **Efficient autoregressive generation** via a recurrent mode (>500k tokens generated on a single 80GB GPU)
+- **Significantly faster training and finetuning** at long context (>3x at 131k)
+- **Improved scaling laws over state-of-the-art architectures** (e.g., Transformer++) on both natural language and biological sequences
+- **Robust to training beyond the compute-optimal frontier**, e.g., training well beyond Chinchilla-optimal token counts (see the preprint for details; more to come)
+
+
+
+### How to use Evo
+
+Example usage is provided in the [standalone repo](https://github.com/evo-design/evo); a minimal sketch is also shown below.
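+
+The following is an illustrative sketch of prompted generation through the `transformers` API. It assumes the repository's remote code exposes the byte-level tokenizer via `AutoTokenizer` and supports the standard `generate` method; the prompt and sampling settings are placeholders, not recommended values.
+
+```python
+import torch
+from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
+
+model_name = "togethercomputer/evo-1-8k-transposon"  # this repository's Hub ID (assumed)
+
+config = AutoConfig.from_pretrained(model_name, trust_remote_code=True, revision="1.1_fix")
+model = AutoModelForCausalLM.from_pretrained(
+    model_name, config=config, trust_remote_code=True, revision="1.1_fix"
+)
+tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, revision="1.1_fix")
+
+# DNA is modeled at single-nucleotide (byte) resolution, so the prompt is a plain string.
+inputs = tokenizer("ACGT", return_tensors="pt")
+
+with torch.no_grad():
+    output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=1.0)
+print(tokenizer.decode(output[0]))
+```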
+
+
+#### Parametrization for Inference and Finetuning
+
+One of the advantages of deep signal processing models is their flexibility. Different parametrizations of convolutions can be used depending on the memory, expressivity, and causality requirements of pretraining, finetuning, or inference workloads.
+
+The main classes are:
+- Modal canonical: unconstrained poles ([reference](https://arxiv.org/pdf/2203.14343.pdf), [reference](https://arxiv.org/abs/2310.18780)) or constrained poles ([reference](https://arxiv.org/abs/2206.11893), [reference](https://arxiv.org/pdf/2303.06349.pdf)).
+- Companion canonical / rational: TBA.
+- Hypernetworks: hypernetwork ([reference](https://arxiv.org/abs/2102.02611)), modulated hypernetwork ([reference](https://arxiv.org/abs/2302.10866)).
+- Explicit: modulated explicit ([reference](https://arxiv.org/pdf/2210.09298.pdf)).
+
+StripedHyena is a mixed-precision model: make sure to keep your `poles` and `residues` in `float32` precision, especially for longer prompts or training. A sketch of one way to enforce this follows.
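+
+A minimal sketch, assuming the filter parameters carry "poles" and "residues" in their parameter names; the exact names depend on the StripedHyena implementation, so treat the string matching as an assumption rather than a documented API.
+
+```python
+import torch
+
+# After loading the model (possibly in float16/bfloat16), selectively upcast
+# the Hyena filter parameters back to float32 for numerical stability.
+for name, param in model.named_parameters():
+    if "poles" in name or "residues" in name:  # assumed naming convention
+        param.data = param.data.to(torch.float32)
+```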
+
+
+### Disclaimer
+
+To use StripedHyena outside of the playground, you will need to install custom kernels. Please follow the instructions in the [standalone repository](https://github.com/togethercomputer/stripedhyena).
+
+## Cite
+
+```bibtex
+@article{nguyen2024sequence,
+  author = {Eric Nguyen and Michael Poli and Matthew G. Durrant and Armin W. Thomas and Brian Kang and Jeremy Sullivan and Madelena Y. Ng and Ashley Lewis and Aman Patel and Aaron Lou and Stefano Ermon and Stephen A. Baccus and Tina Hernandez-Boussard and Christopher Ré and Patrick D. Hsu and Brian L. Hie},
+  journal = {Arc Institute manuscripts},
+  title = {Sequence modeling and design from molecular to genome scale with Evo},
+  url = {https://arcinstitute.org/manuscripts/Evo},
+  year = {2024},
+}
+```

pytorch_model.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:46a6ff20c9ea1979868907abd8d6861238b290a4a9278454f87f51ada17400f2
+size 16814399082

special_tokens_map.json ADDED
@@ -0,0 +1 @@
+{}

tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+{
+  "added_tokens_decoder": {},
+  "auto_map": {
+    "AutoTokenizer": [
+      "tokenizer.ByteTokenizer",
+      null
+    ]
+  },
+  "byte_level": true,
+  "clean_up_tokenization_spaces": true,
+  "model_max_length": 1000000000000000019884624838656,
+  "padding_side": "left",
+  "truncation_side": "left"
+}
+