Update README.md
README.md
CHANGED

@@ -31,7 +31,7 @@ All the models have a comparable model size between 90 MB and 150 MB, BPE tokeni

The model is trained based on the State-Space Mamba-130m model with a modified tokenizer specific to DNA sequences.

-This model is fine-tuned for predicting open chromatin.
+This model is fine-tuned for predicting open chromatin in three classes (0: Not open chromatin, 1: Full open chromatin, 2: Partial open chromatin).

### How to use
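The hunk cuts off before the body of the "How to use" section, so the following is only a hedged sketch of how a checkpoint like this could be loaded for three-class prediction with Hugging Face `transformers`. The model id is a hypothetical placeholder, and the assumption that the checkpoint exposes a sequence-classification head loadable via the Auto classes (possibly requiring `trust_remote_code=True`) is not stated in the diff.

```python
# Sketch only. Assumptions: the model id below is a placeholder, and the
# repository ships a sequence-classification head usable through the Auto API.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "your-org/dna-mamba-130m-open-chromatin"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=3, trust_remote_code=True
)

# Tokenize a raw DNA sequence with the DNA-specific BPE tokenizer.
sequence = "ACGTACGTAGCTAGCTTACGGATCCGATCGATCG"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Label mapping from the README: 0 = Not open chromatin,
# 1 = Full open chromatin, 2 = Partial open chromatin.
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```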