gokceuludogan committed on
Commit
7be9380
1 Parent(s): f7fc750

Update About.py

Files changed (1)
  1. About.py +17 -9
About.py CHANGED
@@ -24,7 +24,7 @@ def run():
         layout='wide'
     )
 
-    st.write("## [Exploiting Pretrained Biochemical Language Models for Targeted Drug Design](https://doi.org/10.1093/bioinformatics/btac482)")
+    st.write("## [Exploiting Pretrained Biochemical Language Models for Targeted Drug Design](https://arxiv.org/abs/2209.00981)")
     #st.sidebar.title("Model Demos")
     st.sidebar.success("Select a model demo above.")
 
@@ -48,7 +48,7 @@ def run():
     biochemical language models to initialize (i.e. warm start) targeted molecule generation models. We investigate
     two warm start strategies: (i) a one-stage strategy where the initialized model is trained on targeted molecule generation
     and (ii) a two-stage strategy containing a pre-finetuning on molecular generation followed by target-specific training. We
-    also compare two decoding strategies to generate compounds: beamsearch and sampling.
+    also compare two decoding strategies to generate compounds: beam search and sampling.
 
     **Results:** The results show that the warm-started models perform better than a baseline model trained from scratch.
     The two proposed warm-start strategies achieve similar results to each other with respect to widely used metrics
@@ -60,13 +60,21 @@ def run():
     **Availability and implementation:** The source code is available at https://github.com/boun-tabi/biochemical-lms-for-drug-design and the materials (i.e., data, models, and outputs) are archived in Zenodo at https://doi.org/10.5281/zenodo.6832145.
     ### Citation
     ```bibtex
-    @article{10.1093/bioinformatics/btac482,
-    author = {Uludoğan, Gökçe and Ozkirimli, Elif and Ulgen, Kutlu O. and Karalı, Nilgün Lütfiye and Özgür, Arzucan},
-    title = "{Exploiting Pretrained Biochemical Language Models for Targeted Drug Design}",
-    journal = {Bioinformatics},
-    year = {2022},
-    doi = {10.1093/bioinformatics/btac482},
-    url = {https://doi.org/10.1093/bioinformatics/btac482}
+    @misc{https://doi.org/10.48550/arxiv.2209.00981,
+    doi = {10.48550/ARXIV.2209.00981},
+
+    url = {https://arxiv.org/abs/2209.00981},
+
+    author = {Uludoğan, Gökçe and Ozkirimli, Elif and Ulgen, Kutlu O. and Karalı, Nilgün and Özgür, Arzucan},
+
+    keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Biomolecules (q-bio.BM), Quantitative Methods (q-bio.QM), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Biological sciences, FOS: Biological sciences},
+
+    title = {Exploiting Pretrained Biochemical Language Models for Targeted Drug Design},
+
+    publisher = {arXiv},
+
+    year = {2022}
+
     }
     ```
     """