---
license: cc-by-nc-sa-4.0
tags:
- Helical
- RNA
- Transformers
- Sequence
- biology
- mrna
- rna
- genomics
library_name: transformers
---
# Helix-mRNA-v0

Helix-mRNA is a hybrid state-space and transformer model, combining the efficient sequence processing of Mamba2's state-space architecture with the contextual understanding of transformer attention to capture the strengths of both approaches. These traits make it particularly well suited to studying full-length transcripts, splice variants, and complex mRNA structural elements.
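To make the hybrid layout concrete, below is a minimal, illustrative PyTorch sketch of alternating a state-space-style sequence mixer with standard attention. This is not the Helix-mRNA implementation: the `SSMBlockStandIn` (a gated depthwise convolution standing in for a Mamba2 mixer), the layer counts, and the dimensions are all assumptions for illustration.

```python
# Toy sketch of a hybrid state-space + attention stack; NOT the actual
# Helix-mRNA architecture. All dimensions and the SSM stand-in are assumed.
import torch
import torch.nn as nn

class SSMBlockStandIn(nn.Module):
    """Stand-in for a Mamba2-style mixer: a gated depthwise convolution."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=4, padding=3, groups=dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, seq_len, dim)
        h = self.conv(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return x + h * torch.sigmoid(self.gate(x))  # gated residual mixing

class HybridEncoder(nn.Module):
    """Alternates sequence-mixing blocks with attention blocks."""
    def __init__(self, dim=64, n_pairs=2, n_heads=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for _ in range(n_pairs):
            self.layers.append(SSMBlockStandIn(dim))
            self.layers.append(nn.MultiheadAttention(dim, n_heads, batch_first=True))

    def forward(self, x):
        for layer in self.layers:
            if isinstance(layer, nn.MultiheadAttention):
                attn_out, _ = layer(x, x, x)
                x = x + attn_out  # residual attention
            else:
                x = layer(x)
        return x

tokens = torch.randn(2, 80, 64)       # (batch, seq_len, hidden); dummy inputs
print(HybridEncoder()(tokens).shape)  # torch.Size([2, 80, 64])
```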

We tokenize mRNA sequences at single-nucleotide resolution, mapping each nucleotide (A, C, U, G) and the ambiguous base N to a unique integer. A special character, E, is inserted to mark the start of each codon. This fine-grained approach maximizes the model's ability to extract patterns from the sequences. Unlike coarser tokenization methods that group nucleotides together or use k-mers, single-nucleotide resolution preserves the full sequential information of the mRNA molecule: no information is lost during preprocessing, and the model learns directly from the raw sequence composition.
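To illustrate this encoding, here is a minimal sketch of such a tokenizer. The actual integer IDs used by Helical are internal to the library, so the mapping below is an assumption:

```python
# Minimal sketch of the single-nucleotide tokenization described above.
# The integer IDs are assumed for illustration; the library's real mapping
# may differ.
VOCAB = {"E": 0, "A": 1, "C": 2, "U": 3, "G": 4, "N": 5}

def tokenize(sequence: str) -> list[int]:
    """Map an E-annotated mRNA sequence to one integer token per character."""
    return [VOCAB[base] for base in sequence.upper()]

print(tokenize("EAUGEGCU"))  # [0, 1, 3, 4, 0, 4, 2, 3] -- E marks each codon start
```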

<p align="center">
<img src="assets/results_graph.png" alt="bar_charts" width="750"/>
<figcaption align="center">Helix-mRNA benchmark comparison against Transformer HELM, Transformer XE and CodonBERT.</figcaption>
</p>

Read more about it in our [blog post](https://www.helical-ai.com/blog/helix-mrna-v0)!

# Helical<a name="helical"></a>

#### Install the package

Run the following to install the [Helical](https://github.com/helicalAI/helical) package via pip:
```console
pip install --upgrade helical
```

#### Generate Embeddings
```python
from helical import HelixmRNA, HelixmRNAConfig
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

input_sequences = ["EACU"*20, "EAUG"*20, "EAUG"*20, "EACU"*20, "EAUU"*20]

helix_mrna_config = HelixmRNAConfig(batch_size=5, device=device, max_length=100)
helix_mrna = HelixmRNA(configurer=helix_mrna_config)

# prepare data for input to the model
processed_input_data = helix_mrna.process_data(input_sequences)

# generate the embeddings for the processed data
embeddings = helix_mrna.get_embeddings(processed_input_data)
```
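The resulting embeddings can be used as features for downstream analysis. As a minimal sketch, assuming `get_embeddings` returns one vector per input sequence (an assumption, not a documented guarantee), a pairwise cosine similarity could be computed as follows:

```python
# Assumes `embeddings` is array-like with one vector per input sequence;
# this post-processing is illustrative, not part of the Helical API.
import numpy as np

emb = np.asarray(embeddings)
a, b = emb[0], emb[1]
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity between sequences 0 and 1: {cosine:.3f}")
```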

#### Fine-Tuning
Classification fine-tuning example:
```python
from helical import HelixmRNAFineTuningModel, HelixmRNAConfig
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

input_sequences = ["EACU"*20, "EAUG"*20, "EAUG"*20, "EACU"*20, "EAUU"*20]
labels = [0, 2, 2, 0, 1]

helix_mrna_config = HelixmRNAConfig(batch_size=5, device=device, max_length=100)
helix_mrna_fine_tune = HelixmRNAFineTuningModel(helix_mrna_config=helix_mrna_config, fine_tuning_head="classification", output_size=3)

# prepare data for input to the model
train_dataset = helix_mrna_fine_tune.process_data(input_sequences)

# fine-tune the model with the relevant training labels
helix_mrna_fine_tune.train(train_dataset=train_dataset, train_labels=labels)

# get outputs from the fine-tuned model on a processed dataset
outputs = helix_mrna_fine_tune.get_outputs(train_dataset)
```
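The outputs are the scores produced by the classification head. Assuming they form an array of shape `(n_sequences, output_size)` (an assumption about the return format, not a documented guarantee), predicted class labels can be recovered with an argmax:

```python
# Assumes `outputs` is array-like of shape (n_sequences, output_size);
# illustrative post-processing, not part of the Helical API.
import numpy as np

predicted_labels = np.argmax(np.asarray(outputs), axis=1)
print(predicted_labels)  # one predicted class index per input sequence
```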

#### Citation

```bibtex
@software{allard_2024_13135902,
  author       = {Helical Team},
  title        = {helicalAI/helical: v0.0.1-alpha10},
  month        = nov,
  year         = 2024,
  publisher    = {Zenodo},
  version      = {0.0.1a10},
  doi          = {10.5281/zenodo.13135902},
  url          = {https://doi.org/10.5281/zenodo.13135902}
}
```