---
language:
- en
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: choices
    dtype: string
  - name: label
    dtype: int64
  - name: description
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 705885259.25
    num_examples: 66166
  - name: valid
    num_bytes: 100589192.25
    num_examples: 9486
  - name: test
    num_bytes: 100021131.0
    num_examples: 9480
  download_size: 866619578
  dataset_size: 906495582.5
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---
|
|
|
# Dataset Card for ChemQA
|
|
|
Introducing ChemQA: a multimodal question-and-answering dataset on chemistry reasoning. This work is inspired by IsoBench [1] and ChemLLMBench [2].
|
|
|
|
|
## Content
|
|
|
There are 5 QA tasks in total:

* Counting Numbers of Carbons and Hydrogens in Organic Molecules: adapted from the 600 PubChem molecules curated in [2], evenly divided into validation and evaluation sets.
* Calculating Molecular Weights of Organic Molecules: adapted from the 600 PubChem molecules curated in [2], evenly divided into validation and evaluation sets.
* Name Conversion from SMILES to IUPAC: adapted from the 600 PubChem molecules curated in [2], evenly divided into validation and evaluation sets.
* Molecule Captioning and Editing: inspired by [2], adapted from the dataset provided in [3], following the same training, validation, and evaluation splits.
* Retro-synthesis Planning: inspired by [2], adapted from the dataset provided in [4], following the same training, validation, and evaluation splits.
|
|
|
## Load the Dataset
|
|
|
```python
from datasets import load_dataset

dataset_train = load_dataset('shangzhu/ChemQA', split='train')
dataset_valid = load_dataset('shangzhu/ChemQA', split='valid')
dataset_test = load_dataset('shangzhu/ChemQA', split='test')
```
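
Each example carries the fields listed in the metadata above: `image`, `question`, `choices`, `label`, `description`, and `id`. As a minimal sketch, assuming the `image` column decodes to a PIL image (the default behavior of the `datasets` Image feature) and that `label` is the integer answer key, you can inspect a single record like this:

```python
# Inspect one record; field names follow the dataset_info schema above.
sample = dataset_valid[0]

print(sample['id'])           # example identifier
print(sample['question'])     # question text
print(sample['choices'])      # answer choices, stored as a single string
print(sample['label'])        # integer answer label (assumed to index the correct choice)
print(sample['description'])  # accompanying description (string)

# The image column decodes to a PIL image by default.
sample['image'].save('molecule.png')
```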
|
|
|
## References
|
|
|
[1] Fu, D., Khalighinejad, G., Liu, O., Dhingra, B., Yogatama, D., Jia, R., & Neiswanger, W. (2024). IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations.

[2] Guo, T., Guo, K., Nan, B., Liang, Z., Guo, Z., Chawla, N., Wiest, O., & Zhang, X. (2023). What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks. Advances in Neural Information Processing Systems, 36, 59662–59688.

[3] Edwards, C., Lai, T., Ros, K., Honke, G., Cho, K., & Ji, H. (2022). Translation between Molecules and Natural Language. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 375–413.

[4] Irwin, R., Dimitriadis, S., He, J., & Bjerrum, E. J. (2022). Chemformer: a pre-trained transformer for computational chemistry. Machine Learning: Science and Technology, 3(1), 15022.
|
|
|
|
|
|
|
## Citation
|
|
|
```BibTeX
@misc{chemQA2024,
  title={ChemQA: a Multimodal Question-and-Answering Dataset on Chemistry Reasoning},
  author={Shang Zhu and Xuefeng Liu and Ghazal Khalighinejad},
  year={2024},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/shangzhu/ChemQA}},
}
```
|
|
|
|
|
## Contact
|
|
|
shangzhu@umich.edu