---
pretty_name: Wind Tunnel dataset
size_categories:
  - 10K<n<100K
---

Wind Tunnel Dataset

The Wind Tunnel Dataset contains 20,000 OpenFOAM simulations of 1,000 unique automobile-like objects placed in a virtual wind tunnel. Each object is simulated under 20 distinct conditions: 4 random wind speeds ranging from 10 to 50 m/s, combined with 5 rotation angles (0°, 180°, and 3 random angles). To ensure stable and reliable results, each simulation runs for 300 iterations. The meshes for these automobile-like objects were generated with the Instant Mesh model from the Stanford Cars Dataset. The full set of 20,000 simulations is split into three subsets: 70% for training (14,000 simulations), 20% for validation (4,000), and 10% for testing (2,000).

The data generation process itself was orchestrated using the Inductiva API, which allowed us to run hundreds of OpenFOAM simulations in parallel on the cloud.

Dataset Structure

data
├── train
│   ├── <SIMULATION_ID>
│   │   ├── input_mesh.obj
│   │   ├── openfoam_mesh.obj
│   │   ├── pressure_field_mesh.vtk
│   │   ├── simulation_metadata.json
│   │   └── streamlines_mesh.ply
│   └── ...
├── validation
│   └── ...
└── test
    └── ...
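
As a quick sanity check, the layout above can be traversed with standard library tools once the dataset has been downloaded (see the download instructions below). The following is a minimal sketch; the path "local_folder" is a placeholder for wherever you downloaded the repository.

from pathlib import Path

# Placeholder path: adjust to wherever the dataset was downloaded
data_root = Path("local_folder") / "data"

for split in ["train", "validation", "test"]:
    simulation_dirs = sorted(p for p in (data_root / split).iterdir() if p.is_dir())
    print(f"{split}: {len(simulation_dirs)} simulations")

    # Check the first simulation folder for the five expected files
    if simulation_dirs:
        expected = {
            "input_mesh.obj",
            "openfoam_mesh.obj",
            "pressure_field_mesh.vtk",
            "simulation_metadata.json",
            "streamlines_mesh.ply",
        }
        present = {f.name for f in simulation_dirs[0].iterdir()}
        print("  missing files in first simulation:", expected - present)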

Dataset Files

Each simulation in the Wind Tunnel Dataset is accompanied by several key files that provide both input and output data. Here's a breakdown of the files included in each simulation, followed by a short sketch of how to load them:

  • input_mesh.obj: OBJ file with the input mesh.
  • openfoam_mesh.obj: OBJ file with the OpenFOAM mesh.
  • pressure_field_mesh.vtk: VTK file with the pressure field data.
  • streamlines_mesh.ply: PLY file with the streamlines.
  • simulation_metadata.json: JSON file with metadata about the input parameters and selected output results, such as the force coefficients obtained from the simulation and the paths of the output files.

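The sketch below shows one way to read these files for a single simulation. It is not part of the dataset itself and assumes the third-party pyvista package for the mesh and VTK files; the names of the data arrays inside the VTK file are printed rather than assumed.

import json
from pathlib import Path

import pyvista as pv  # assumed third-party dependency: pip install pyvista

# Placeholder: path to one simulation folder inside a local copy of the dataset
sim_dir = Path("local_folder/data/train/<SIMULATION_ID>")

# Input parameters and output summary (e.g., force coefficients)
with open(sim_dir / "simulation_metadata.json") as f:
    metadata = json.load(f)
print(metadata.keys())

# Meshes and fields: pyvista can read OBJ, VTK, and PLY files
input_mesh = pv.read(sim_dir / "input_mesh.obj")
pressure_field = pv.read(sim_dir / "pressure_field_mesh.vtk")
streamlines = pv.read(sim_dir / "streamlines_mesh.ply")

# The pressure values are stored as data arrays on the mesh;
# print the array names to see what is available
print(pressure_field.array_names)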
Downloading the Dataset

1. Using snapshot_download()

from huggingface_hub import snapshot_download

dataset_name = "inductiva/windtunnel"

# Download the entire dataset
snapshot_download(repo_id=dataset_name)

# Download to a specific local directory
snapshot_download(repo_id=dataset_name, local_dir="local_folder")

# Download only the input mesh files across all simulations
snapshot_download(allow_patterns=["*/*/*/input_mesh.obj"], repo_id=dataset_name)
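
As a variation on the patterns above, the sketch below downloads only the metadata JSON files and inspects one of them. The glob pattern and the key inspection assume the data/<split>/<SIMULATION_ID>/ layout shown earlier; the local directory name is a placeholder.

import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Download only the metadata files, mirroring the directory layout locally
local_path = snapshot_download(
    repo_id=dataset_name,
    allow_patterns=["*/*/*/simulation_metadata.json"],
    local_dir="windtunnel_metadata",
)

# Inspect the first metadata file found
first_metadata = next(Path(local_path).rglob("simulation_metadata.json"))
with open(first_metadata) as f:
    print(json.load(f).keys())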

2. Using load_dataset()

from datasets import load_dataset

# Load the dataset (streaming is supported)
dataset = load_dataset("inductiva/windtunnel", streaming=False)

# Display dataset information
print(dataset)

# Access a sample from the training set
sample = dataset["train"][0]
print("Sample from training set:", sample)