# ManyPeptidesMD Dataset

Welcome to the ManyPeptidesMD dataset!
This dataset was generated as part of the work Amortized Sampling with Transferable Normalizing Flows.
ManyPeptidesMD is a collection of molecular dynamics (MD) trajectories for 21,700 randomly sampled peptide sequences, generated to support research in molecular simulation, machine learning for molecular dynamics, and unnormalized density sampling.
The distribution of sequence lengths across splits is as follows:

| Sequence length | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|
| Training | 200 | 1,000 | 1,500 | 2,000 | 3,000 | 4,000 | 10,000 |
| Validation | 10 | 0 | 10 | 0 | 0 | 0 | 10 |
| Testing | 30 | 0 | 30 | 0 | 0 | 0 | 30 |
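As a quick sanity check (our arithmetic, not part of the dataset itself), the per-length counts in the table sum to the split sizes this card describes:

```python
# Per-length sequence counts from the table above (lengths 2 through 8).
training = [200, 1_000, 1_500, 2_000, 3_000, 4_000, 10_000]
validation = [10, 0, 10, 0, 0, 0, 10]
testing = [30, 0, 30, 0, 0, 0, 30]

print(sum(training))    # 21700 training sequences
print(sum(validation))  # 30
print(sum(testing))     # 90
```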
## Usage

The easiest way to use this dataset is through the accompanying codebase, which streams and caches the training webdataset and downloads the evaluation data automatically.
## Data Organization

### Full Trajectories

- Location: `trajectories/`
- Sampling rate:
  - Training: positions and velocities saved every 1 ps
  - Validation & testing: positions and velocities saved every 10 ps
- Trajectory length:
  - Training: 200 ns
  - Validation & testing: 5 μs
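For a rough sense of scale, the save interval and trajectory length imply the following per-sequence frame counts (our back-of-envelope numbers, assuming exact intervals, not an official specification):

```python
# Frames per trajectory = trajectory length / save interval (all times in ps).
training_frames = 200_000 // 1   # 200 ns at 1 ps/frame
eval_frames = 5_000_000 // 10    # 5 us at 10 ps/frame

print(training_frames)  # 200000 frames per training trajectory
print(eval_frames)      # 500000 frames per validation/testing trajectory
```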
### PDB Files

- Location: `pdb_tarfiles/`
- Note: due to Hugging Face repository limits, these are provided as `.tar` files for each subset.
## Additional Formats

For ease of use we additionally provide:

### Webdataset

- Path: `webdatasets/single_frames/`
- Format: each `.tar` contains 4 randomly selected position frames per sequence, drawn from a 10 ps/frame subsample of the full training trajectories. The samples are preshuffled within each `.tar`. Sample filenames are formatted as `{SEQUENCE}_{TIME}.bin`, where `{TIME}` is the time in picoseconds from the original trajectory.
- Coming soon: webdatasets for pairs of samples (e.g. a 10 ps interval) and chunks of trajectory samples.
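The `{SEQUENCE}_{TIME}.bin` naming convention can be split back into its parts with a small helper (a sketch of ours, not an API shipped with the dataset; the example filename is made up):

```python
def parse_sample_name(filename: str) -> tuple[str, int]:
    """Split a '{SEQUENCE}_{TIME}.bin' sample name into (sequence, time_in_ps)."""
    stem = filename.removesuffix(".bin")
    # rpartition keeps any earlier underscores with the sequence part.
    sequence, _, time_ps = stem.rpartition("_")
    return sequence, int(time_ps)

print(parse_sample_name("AGHK_152000.bin"))  # ('AGHK', 152000)
```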
### Subsampled Validation & Test Sets

- Path: `trajectories_subsampled/`
- Description: 500 ps downsample of the validation/testing trajectories, giving sets of 10,000 samples each (as used in the paper's evaluation).
- Additional info: includes TICA projection data, computed from the full 10 ps-interval trajectories, for metric calculation.
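The stated set size follows directly from the downsampling arithmetic (our check, using the trajectory length and stride given above):

```python
# 5 us evaluation trajectories, keeping one frame every 500 ps.
trajectory_ps = 5_000_000  # 5 microseconds in picoseconds
stride_ps = 500
n_samples = trajectory_ps // stride_ps
print(n_samples)  # 10000 samples per sequence
```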
## Simulation Details

All simulations were performed using OpenMM with the following configuration:
```python
import openmm
import openmm.app
import openmm.unit

# AMBER ff14 force field with OBC1 implicit solvent
forcefield = openmm.app.ForceField("amber14-all.xml", "implicit/obc1.xml")
nonbondedMethod = openmm.app.CutoffNonPeriodic
nonbondedCutoff = 2.0 * openmm.unit.nanometer
temperature = 310  # Kelvin

# Initialize forcefield system (self.pdb_dict maps sequences to their PDB files
# in the accompanying codebase)
system = forcefield.createSystem(
    self.pdb_dict[sequence].topology,
    nonbondedMethod=nonbondedMethod,
    nonbondedCutoff=nonbondedCutoff,
    constraints=None,
)

# Initialize integrator: 310 K, 0.3 ps^-1 friction, 1 fs timestep
integrator = openmm.LangevinMiddleIntegrator(
    temperature * openmm.unit.kelvin,
    0.3 / openmm.unit.picosecond,
    1.0 * openmm.unit.femtosecond,
)
```
## Citation

If you use this dataset, please cite our work *Amortized Sampling with Transferable Normalizing Flows*.
## Acknowledgement

We are very grateful to Hugging Face for hosting this large dataset!