---
license: mit
language:
  - en
metrics:
  - accuracy
pipeline_tag: graph-ml
tags:
  - chemistry
  - biology
  - medical
---

# PLA-Net

## Model Details

The total size of all models is approximately 55 GB.

There are four models available:

- **LM**: Ligand Module trained on the AD dataset.
- **LM+Advs**: Ligand Module trained on the AD dataset with adversarial training.
- **LMPM**: Protein Module trained on the AD dataset using the weights of the Ligand Module.
- **PLA-Net**: the full model (Ligand Module + Protein Module) trained on the AD dataset.

Each of them includes models for 102 targets, trained with 4-fold cross-validation. The folder structure is the following:

```
checkpoints/
    LM/
        BINARY_ada/
            Fold1/
                Best_Model.pth
            Fold2/
                Best_Model.pth
            ...
        ...
    LM+Advs/
        ...
    LMPM/
        ...
    PLA-Net/
        ...
```
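Given this layout, the per-fold checkpoints for one model family can be located programmatically. A minimal sketch, assuming the `checkpoints/` tree shown above:

```python
from pathlib import Path

# Collect the best checkpoint saved for each target and fold of the LM family.
for ckpt in sorted(Path("checkpoints/LM").glob("BINARY_*/Fold*/Best_Model.pth")):
    target, fold = ckpt.parts[-3], ckpt.parts[-2]
    print(target, fold, ckpt)
```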

## Model Description

PLA-Net is a deep learning model designed to predict interactions between small organic molecules (ligands) and any of the 102 target proteins in the Alzheimer's Disease (AD) dataset. By transforming molecular and protein sequences into graph representations, PLA-Net leverages Graph Convolutional Networks (GCNs) to analyze and predict target-ligand interaction probabilities.

### Key Features

- **Graph-Based Input Representation**
  - **Ligand Module (LM):** Converts SMILES sequences of molecules into graph representations.
  - **Protein Module (PM):** Transforms FASTA sequences of proteins into graph structures.
- **Deep Graph Convolutional Networks**
  - Each module employs a deep GCN followed by an average pooling layer to extract meaningful features from the input graphs.
- **Interaction Prediction**
  - The feature representations from the LM and PM are concatenated.
  - A fully connected layer processes the combined features to predict the interaction probability between the ligand and the target protein (see the sketch after this list).
- **Developed by:** BCV-Uniandes
- **Model type:** Graph Convolutional Networks (GCNs)
- **Language(s):** Python
- **License:** MIT
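The fusion step described above can be pictured in code. Below is a minimal, illustrative sketch of the prediction head only, not the actual PLA-Net implementation; the class name, embedding dimensions, and the GCN encoders that would produce the inputs are all assumptions:

```python
import torch
import torch.nn as nn

class InteractionHead(nn.Module):
    """Illustrative fusion head: concatenate the ligand and protein graph
    embeddings, then apply a fully connected layer to score the interaction."""

    def __init__(self, ligand_dim: int = 256, protein_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(ligand_dim + protein_dim, 1)

    def forward(self, ligand_feat: torch.Tensor, protein_feat: torch.Tensor) -> torch.Tensor:
        # ligand_feat / protein_feat: graph-level embeddings produced by the
        # LM and PM deep GCNs after average pooling (assumed shape: [batch, dim]).
        combined = torch.cat([ligand_feat, protein_feat], dim=-1)
        return torch.sigmoid(self.fc(combined))  # interaction probability

# Dummy embeddings standing in for the real GCN outputs.
head = InteractionHead()
print(head(torch.randn(1, 256), torch.randn(1, 256)).item())
```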

## Model Sources

## Docker Install

To prevent conflicts with the host machine, it is recommended to run PLA-Net in a Docker container.

First, make sure you have an NVIDIA GPU and the NVIDIA Container Toolkit installed. Then build the image with the following command:

```bash
docker build -t pla-net:latest .
```

### Inference

To run inference, use the following command:

```bash
docker run \
    -it --rm --gpus all \
    -v "$(pwd)":/home/user/output \
    pla-net:latest \
    python /home/user/app/scripts/pla_net_inference.py \
    --use_gpu \
    --target ada \
    --target_list /home/user/app/data/datasets/AD/Targets_Fasta.csv \
    --target_checkpoint_path /home/user/app/pretrained-models/BINARY_ada \
    --input_file_smiles /home/user/app/example/input_smiles.csv \
    --output_file /home/user/output/output_predictions.csv
```

This will run inference for the target protein `ada` with the SMILES in the `input_smiles.csv` file and save the predictions to the `output_predictions.csv` file.
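To score your own molecules, prepare an input file like the bundled example. A minimal sketch, assuming the input CSV uses a single `smiles` column (check `example/input_smiles.csv` in the repository for the exact expected format):

```python
import csv

# Hypothetical molecules to score, written in the format we assume
# example/input_smiles.csv uses: a header row, then one SMILES per line.
smiles = [
    "CC(=O)Oc1ccccc1C(=O)O",          # aspirin
    "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",   # caffeine
]

with open("input_smiles.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["smiles"])  # assumed column header
    writer.writerows([[s] for s in smiles])
```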

The prediction file has the following format:

```csv
target,smiles,interaction_probability,interaction_class
ada,Cn4c(CCC(=O)Nc3ccc2ccn(CC[C@H](CO)n1cnc(C(N)=O)c1)c2c3)nc5ccccc45,0.9994347542524338,1
```

Where `interaction_class` is 1 if the interaction probability is greater than 0.5, and 0 otherwise.
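For downstream filtering, the predictions file can be loaded with pandas. A short sketch that re-derives `interaction_class` from the 0.5 threshold described above (pandas is an assumption here; any CSV reader works):

```python
import pandas as pd

# Load the predictions produced by pla_net_inference.py.
preds = pd.read_csv("output_predictions.csv")

# interaction_class is 1 when interaction_probability > 0.5; re-deriving it
# is just a sanity check on the file contents.
assert (preds["interaction_probability"].gt(0.5).astype(int)
        == preds["interaction_class"]).all()

# Keep only confident positive predictions, e.g. probability above 0.9.
hits = preds[preds["interaction_probability"] > 0.9]
print(hits[["target", "smiles", "interaction_probability"]])
```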

Inference arguments:

- `use_gpu`: Use GPU for inference.
- `target`: Target protein ID from the list of targets. Check the list of available targets in the data folder.
- `target_list`: Path to the target list CSV file.
- `target_checkpoint_path`: Path to the target checkpoint (e.g., `/workspace/pretrained-models/BINARY_ada`); there is one checkpoint per target.
- `input_file_smiles`: Path to the input SMILES file.
- `output_file`: Path to the output predictions file.

### Gradio Server

We provide a simple graphical user interface to run PLA-Net with Gradio. To use it, run the following command:

```bash
docker run \
    -it --rm --gpus all \
    -p 7860:7860 \
    pla-net:latest \
    python app.py
```

Then open your browser and go to `http://localhost:7860/` to access the web interface.

## Local Install

To run inference with PLA-Net locally, you need to install the dependencies and activate the environment. You can do this by running the following commands:

```bash
conda env create -f environment.yml
conda activate pla-net
```

Now you can run inference with PLA-Net locally. In the project folder, run the following command:

```bash
python scripts/pla_net_inference.py \
    --use_gpu \
    --target ada \
    --target_list data/datasets/AD/Targets_Fasta.csv \
    --target_checkpoint_path pretrained-models/BINARY_ada \
    --input_file_smiles example/input_smiles.csv \
    --output_file example/output_predictions.csv
```
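To screen the same molecules against several targets, the script can be invoked once per target. A sketch assuming the checkpoint directories follow the `BINARY_<target>` naming shown earlier; the target IDs below are illustrative, so check `Targets_Fasta.csv` for valid values:

```python
import subprocess

# Hypothetical subset of the 102 AD targets.
targets = ["ada", "ache", "egfr"]

for target in targets:
    subprocess.run(
        [
            "python", "scripts/pla_net_inference.py",
            "--use_gpu",
            "--target", target,
            "--target_list", "data/datasets/AD/Targets_Fasta.csv",
            # One checkpoint directory per target, named BINARY_<target>.
            "--target_checkpoint_path", f"pretrained-models/BINARY_{target}",
            "--input_file_smiles", "example/input_smiles.csv",
            "--output_file", f"predictions_{target}.csv",
        ],
        check=True,
    )
```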

## Models

You can download the pre-trained models from Hugging Face.
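One way to fetch them is with the `huggingface_hub` Python client. A sketch with a placeholder repository ID (substitute the actual PLA-Net model repo); `allow_patterns` limits the download to one model family instead of the full ~55 GB:

```python
from huggingface_hub import snapshot_download

# "your-org/PLA-Net" is a placeholder; use the actual Hugging Face repo ID.
snapshot_download(
    repo_id="your-org/PLA-Net",
    local_dir="pretrained-models",
    allow_patterns=["LM/*"],  # fetch only the LM checkpoints
)
```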

## Training

To train each of the components of our method (LM, LM+Advs, LMPM, and PLA-Net), please refer to the `planet.sh` file and run the desired models.

To evaluate each of the components of our method (LM, LM+Advs, LMPM, and PLA-Net), please run the corresponding bash file in the `inference` folder.

## Citation

BibTeX:

```bibtex
@article{ruiz2022predicting,
  title={Predicting target--ligand interactions with graph convolutional networks for interpretable pharmaceutical discovery},
  author={Ruiz Puentes, Paola and Rueda-Gensini, Laura and Valderrama, Natalia and Hern{\'a}ndez, Isabela and Gonz{\'a}lez, Cristina and Daza, Laura and Mu{\~n}oz-Camargo, Carolina and Cruz, Juan C and Arbel{\'a}ez, Pablo},
  journal={Scientific Reports},
  volume={12},
  number={1},
  pages={1--17},
  year={2022},
  publisher={Nature Publishing Group}
}
```

## Model Card Authors