---
dataset_info:
  features:
    - name: seq
      dtype: string
    - name: label
      dtype: float64
  splits:
    - name: train
      num_bytes: 3068946
      num_examples: 53614
    - name: valid
      num_bytes: 155744
      num_examples: 2512
    - name: test
      num_bytes: 709292
      num_examples: 12851
  download_size: 2058102
  dataset_size: 3933982
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: valid
        path: data/valid-*
      - split: test
        path: data/test-*
license: apache-2.0
task_categories:
  - token-classification
tags:
  - chemistry
  - biology
size_categories:
  - 10K<n<100K
---
# Dataset Card for Stability Dataset

## Dataset Summary

The Stability task is to predict the concentration of protease at which a protein can retain its folded state. Because proteases are integral to numerous biological processes, a deeper understanding of protein stability under protease interaction offers significant value, especially for the design of novel therapeutics.
## Dataset Structure

### Data Instances
For each instance, there is a string representing the protein sequence and a float number indicating the stability score. See the stability prediction dataset viewer to explore more examples.
```
{'seq': 'MEHVIDNFDNIDKCLKCGKPIKVVKLKYIKKKIENIPNSHLINFKYCSKCKRENVIENL',
 'label': 0.17}
```
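As a rough illustration, an instance can be loaded and inspected with the Hugging Face `datasets` library. This is a minimal sketch; the repository ID below is a placeholder, since the card does not state the actual Hub ID.

```python
# Minimal sketch: load the dataset and inspect one instance.
# NOTE: "user/stability-dataset" is a placeholder repository ID, not the real one.
from datasets import load_dataset

dataset = load_dataset("user/stability-dataset")

example = dataset["train"][0]
print(example["seq"])    # protein sequence (string)
print(example["label"])  # stability score (float)
```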
The average length of the `seq` and the average value of the `label` are provided below:

| Feature | Mean |
|---|---|
| seq (length) | 45 |
| label | 0.34 |
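For reference, these means could be recomputed along the following lines; the sketch assumes the same placeholder repository ID as above.

```python
# Sketch: recompute the mean sequence length and mean stability score
# on the training split. "user/stability-dataset" is a placeholder ID.
import numpy as np
from datasets import load_dataset

train = load_dataset("user/stability-dataset", split="train")
mean_seq_len = np.mean([len(s) for s in train["seq"]])
mean_label = float(np.mean(train["label"]))
print(f"mean seq length: {mean_seq_len:.1f}, mean label: {mean_label:.2f}")
```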
### Data Fields

- `seq`: a string containing the protein sequence.
- `label`: a float number indicating the stability score of each sequence.
### Data Splits

The stability prediction dataset has 3 splits: train, valid, and test. Below are the statistics of the dataset.
| Dataset Split | Number of Instances in Split |
|---|---|
| Train | 53,614 |
| Valid | 2,512 |
| Test | 12,851 |
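These counts can be checked once the dataset is loaded; the sketch below again uses a placeholder repository ID.

```python
# Sketch: print the number of examples in each split; the counts should
# match the table above. "user/stability-dataset" is a placeholder ID.
from datasets import load_dataset

dataset = load_dataset("user/stability-dataset")
for split_name in ("train", "valid", "test"):
    print(split_name, dataset[split_name].num_rows)
```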
## Source Data

### Initial Data Collection and Normalization
The dataset used in this task is originally sourced from Rocklin et al. and was subsequently collected as part of the TAPE benchmark.
## Licensing Information
The dataset is released under the Apache-2.0 License.
## Citation
If you find our work useful, please consider citing the following paper:
```bibtex
@misc{chen2024xtrimopglm,
      title={xTrimoPGLM: unified 100B-scale pre-trained transformer for deciphering the language of protein},
      author={Chen, Bo and Cheng, Xingyi and Li, Pan and Geng, Yangli-ao and Gong, Jing and Li, Shen and Bei, Zhilei and Tan, Xu and Wang, Boyan and Zeng, Xin and others},
      year={2024},
      eprint={2401.06199},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      note={arXiv preprint arXiv:2401.06199}
}
```