gokceuludogan committed · Commit 7be9380 · 1 Parent(s): f7fc750

Update About.py
About.py CHANGED
@@ -24,7 +24,7 @@ def run():
         layout='wide'
     )
 
-    st.write("## [Exploiting Pretrained Biochemical Language Models for Targeted Drug Design](https://
+    st.write("## [Exploiting Pretrained Biochemical Language Models for Targeted Drug Design](https://arxiv.org/abs/2209.00981)")
     #st.sidebar.title("Model Demos")
     st.sidebar.success("Select a model demo above.")
 
@@ -48,7 +48,7 @@ def run():
     biochemical language models to initialize (i.e. warm start) targeted molecule generation models. We investigate
     two warm start strategies: (i) a one-stage strategy where the initialized model is trained on targeted molecule generation
     and (ii) a two-stage strategy containing a pre-finetuning on molecular generation followed by target-specific training. We
-    also compare two decoding strategies to generate compounds:
+    also compare two decoding strategies to generate compounds: beam search and sampling.
 
     **Results:** The results show that the warm-started models perform better than a baseline model trained from scratch.
     The two proposed warm-start strategies achieve similar results to each other with respect to widely used metrics
@@ -60,13 +60,21 @@ def run():
     **Availability and implementation:** The source code is available at https://github.com/boun-tabi/biochemical-lms-for-drug-design and the materials (i.e., data, models, and outputs) are archived in Zenodo at https://doi.org/10.5281/zenodo.6832145.
     ### Citation
     ```bibtex
-    @
-
-
-
-
-
-
+    @misc{https://doi.org/10.48550/arxiv.2209.00981,
+      doi = {10.48550/ARXIV.2209.00981},
+
+      url = {https://arxiv.org/abs/2209.00981},
+
+      author = {Uludoğan, Gökçe and Ozkirimli, Elif and Ulgen, Kutlu O. and Karalı, Nilgün and Özgür, Arzucan},
+
+      keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Biomolecules (q-bio.BM), Quantitative Methods (q-bio.QM), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Biological sciences, FOS: Biological sciences},
+
+      title = {Exploiting Pretrained Biochemical Language Models for Targeted Drug Design},
+
+      publisher = {arXiv},
+
+      year = {2022}
+
     }
     ```
     """
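A minimal sketch of the warm-start idea described in the abstract above, initializing an encoder-decoder generation model from pretrained biochemical language models rather than from scratch, might look like the following in Hugging Face `transformers`. The checkpoint names here are illustrative assumptions, not necessarily the models used in the paper.

```python
# Warm-start sketch (illustrative; checkpoint names are assumptions):
# initialize an encoder-decoder generator from pretrained protein and
# chemical language models instead of training from scratch.
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "Rostlab/prot_bert",               # assumed pretrained protein LM as encoder
    "seyonec/ChemBERTa-zinc-base-v1",  # assumed pretrained chemical LM as decoder
)
# The decoder's cross-attention layers are newly added here, so they still
# need training: directly on targeted molecule generation (one-stage), or
# after a molecular-generation pre-finetuning step (two-stage).
```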
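Likewise, a hedged sketch of the two decoding strategies the abstract compares, beam search and sampling, via the `generate()` API; the placeholder checkpoint, input, and parameter values are assumptions for illustration only.

```python
# Beam search vs. sampling with generate() (placeholder model; values illustrative).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")      # placeholder checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # placeholder checkpoint

inputs = tokenizer("MKTAYIAKQR", return_tensors="pt")      # stand-in for a target sequence

# Beam search: deterministic; keeps the num_beams highest-scoring partial sequences.
beam_ids = model.generate(**inputs, num_beams=5, max_length=128)

# Sampling: draws each token from the model's distribution, trading likelihood
# for diversity among the generated compounds.
sample_ids = model.generate(**inputs, do_sample=True, top_k=50,
                            max_length=128, num_return_sequences=10)

print(tokenizer.batch_decode(beam_ids, skip_special_tokens=True))
print(tokenizer.batch_decode(sample_ids, skip_special_tokens=True))
```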