# Super-resolution of Velocity Fields in Three-dimensional Fluid Dynamics
This dataset loader attempts to reproduce the data from Wang et al. (2024)'s experiments on super-resolution of 3D turbulence.
References:
- Wang et al. (2024): "Discovering Symmetry Breaking in Physical Systems with Relaxed Group Convolution"
## Usage
For a given configuration (e.g. `large_50`):
```python
>>> ds = datasets.load_dataset("dl2-g32/jhtdb", name="large_50")
>>> ds
DatasetDict({
    train: Dataset({
        features: ['lrs', 'hr'],
        num_rows: 40
    })
    validation: Dataset({
        features: ['lrs', 'hr'],
        num_rows: 5
    })
    test: Dataset({
        features: ['lrs', 'hr'],
        num_rows: 5
    })
})
```
Each split contains the input `lrs`, which corresponds to a sequence of low-resolution samples at times t - ws/2, ..., t, ..., t + ws/2 (ws = window size), and `hr`, which corresponds to the high-resolution sample at time t. All the parameters for each data point are specified in the corresponding `metadata_*.csv`.
Specifically, for the default configuration, each data point has 3 low-resolution samples and 1 high-resolution sample. Each of the former has shape `(3, 16, 16, 16)` and the latter has shape `(3, 64, 64, 64)`.
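As a minimal sketch of how to inspect these shapes (assuming the `large_50` configuration from the example above and that the fields convert cleanly to NumPy arrays):

```python
import numpy as np
from datasets import load_dataset

# Load one configuration of the dataset (same call as in the Usage example above).
ds = load_dataset("dl2-g32/jhtdb", name="large_50")

sample = ds["train"][0]
lrs = np.asarray(sample["lrs"])  # window of low-resolution samples, e.g. (3, 3, 16, 16, 16)
hr = np.asarray(sample["hr"])    # high-resolution sample at the central time t, (3, 64, 64, 64)
print(lrs.shape, hr.shape)
```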
## Replication
This dataset is entirely generated by `scripts/generate.py`, and each configuration is fully specified in its corresponding `scripts/*.yaml`.
### Usage
```bash
python -m scripts.generate --config scripts/small_100.yaml --token edu.jhu.pha.turbulence.testing-201311
```
This will create two folders in `datasets/jhtdb`:
- A `tmp` folder that stores all samples across runs to serve as a cache.
- The corresponding subset, `small_100` in this example. This folder will contain a `metadata_*.csv` and a `data*.zip` for each split.
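As a quick sanity check (a sketch, assuming the layout described above and the `small_100` run from the command shown), the generated files can be listed with:

```python
from pathlib import Path

out = Path("datasets/jhtdb")

# Top-level contents: the shared `tmp` cache plus one folder per generated subset.
print(sorted(p.name for p in out.iterdir()))

# Per-subset contents: a metadata_*.csv and a data*.zip for each split.
for f in sorted((out / "small_100").iterdir()):
    print(f.name)
```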
Note:
- For the small variants, the default token is enough, but for the large variants a token has to be requested. More details here.
- For reference, `large_100` takes ~15 minutes to generate, for a total of ~300MB.