Instructions to use DongkiKim/Mol-Llama-3.1-8B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use DongkiKim/Mol-Llama-3.1-8B-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DongkiKim/Mol-Llama-3.1-8B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("DongkiKim/Mol-Llama-3.1-8B-Instruct", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use DongkiKim/Mol-Llama-3.1-8B-Instruct with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DongkiKim/Mol-Llama-3.1-8B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DongkiKim/Mol-Llama-3.1-8B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/DongkiKim/Mol-Llama-3.1-8B-Instruct
```
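Since the vLLM server speaks the OpenAI-compatible chat-completions protocol, it can also be called from Python with nothing but the standard library. This is a minimal sketch; the helper names `build_chat_request` and `post_chat` are illustrative, not part of any official API:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def post_chat(url: str, payload: dict) -> dict:
    """POST the payload to the server and decode the JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (with the vLLM server from above running on port 8000):
# reply = post_chat(
#     "http://localhost:8000/v1/chat/completions",
#     build_chat_request("DongkiKim/Mol-Llama-3.1-8B-Instruct",
#                        "What is the capital of France?"),
# )
# print(reply["choices"][0]["message"]["content"])
```

The same payload shape works against any OpenAI-compatible endpoint, so only the URL and port change between serving backends.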
- SGLang
How to use DongkiKim/Mol-Llama-3.1-8B-Instruct with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DongkiKim/Mol-Llama-3.1-8B-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DongkiKim/Mol-Llama-3.1-8B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "DongkiKim/Mol-Llama-3.1-8B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DongkiKim/Mol-Llama-3.1-8B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- Docker Model Runner
How to use DongkiKim/Mol-Llama-3.1-8B-Instruct with Docker Model Runner:
```shell
docker model run hf.co/DongkiKim/Mol-Llama-3.1-8B-Instruct
```
Mol-Llama-3.1-8B-Instruct
[Project Page] [Paper] [GitHub]
This repo contains the weights of Mol-LLaMA, including the LoRA weights and projectors, built on meta-llama/Llama-3.1-8B-Instruct.
Architecture
- Molecular encoders: Pretrained 2D encoder (MoleculeSTM) and 3D encoder (Uni-Mol)
- Blending Module: Combines complementary information from the 2D and 3D encoders via cross-attention
- Q-Former: Embeds molecular representations into query tokens, based on SciBERT
- LoRA: Adapters for fine-tuning LLMs
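The blending step above can be illustrated with a generic cross-attention sketch. Everything here — the NumPy implementation, dimensions, and token counts — is a hypothetical placeholder to show the mechanism, not the actual Mol-LLaMA code:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries_2d, keys_3d, values_3d):
    """Let 2D-encoder tokens attend over 3D-encoder tokens.

    queries_2d: (n2d, d) features from the 2D encoder
    keys_3d, values_3d: (n3d, d) features from the 3D encoder
    Returns (n2d, d) blended features.
    """
    d = queries_2d.shape[-1]
    scores = queries_2d @ keys_3d.T / np.sqrt(d)   # (n2d, n3d) similarity
    weights = softmax(scores, axis=-1)             # attention over 3D tokens
    return weights @ values_3d                     # (n2d, d) blended output

# Hypothetical toy inputs: 8 tokens per encoder, feature dim 64.
rng = np.random.default_rng(0)
feat_2d = rng.normal(size=(8, 64))
feat_3d = rng.normal(size=(8, 64))
blended = cross_attend(feat_2d, feat_3d, feat_3d)
print(blended.shape)  # (8, 64)
```

Each 2D token ends up as a weighted mixture of 3D tokens, which is how cross-attention lets the two views exchange complementary information.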
Training Dataset
Mol-LLaMA is trained on Mol-LLaMA-Instruct to learn the fundamental characteristics of molecules, along with reasoning ability and explainability.
How to Use
Please check out the example inference code in the GitHub repo.
Citation
If you find our model useful, please consider citing our work.
```bibtex
@misc{kim2025molllama,
      title={Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular Language Model},
      author={Dongki Kim and Wonbin Lee and Sung Ju Hwang},
      year={2025},
      eprint={2502.13449},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
Acknowledgements
We thank the authors of LLaMA, 3D-MoLM, MoleculeSTM, Uni-Mol, and SciBERT for their open-source contributions.