---
pretty_name: Wind Tunnel 20K Dataset
size_categories:
- 10K<n<100K
task_categories:
- feature-extraction
- graph-ml
- image-to-3d
language:
- en
tags:
- simulation
- openfoam
- physics
- windtunnel
- inductiva
- machine learning
- synthetic
---

# Wind Tunnel 20K Dataset
The Wind Tunnel Dataset contains 20,000 [OpenFOAM](https://www.openfoam.com/) simulations of 1,000 unique automobile-like objects placed in a virtual wind tunnel. 
Each object is simulated under 20 distinct conditions: 4 random wind speeds ranging from 10 to 50 m/s, and 5 rotation angles (0°, 180° and 3 random angles).
To ensure stable and reliable results, each simulation runs for 300 iterations.
The meshes for these automobile-like objects were generated using the [Instant Mesh model](https://github.com/TencentARC/InstantMesh) and sourced from the [Stanford Cars Dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-cars-dataset).
The entire dataset of 20,000 simulations is organized into three subsets: 70% for training, 20% for validation, and 10% for testing.

The data generation process itself was orchestrated using the [Inductiva API](https://inductiva.ai/), which allowed us to run hundreds of OpenFOAM simulations in parallel on the cloud. 

<p align="center">
  <img src="https://huggingface.co/datasets/inductiva/windtunnel/resolve/main/example.png", width="500px">
</p>

### Dataset Structure
```
data
├── train
│   ├── <SIMULATION_ID>
│   │   ├── input_mesh.obj
│   │   ├── openfoam_mesh.obj
│   │   ├── pressure_field_mesh.vtk
│   │   ├── simulation_metadata.json
│   │   └── streamlines_mesh.ply
│   └── ...
├── validation
│   └── ...
└── test
    └── ...
```

### Dataset Files
Each simulation in the Wind Tunnel Dataset is accompanied by several key files that provide both the input and output data; a short Python sketch for loading them follows the examples below.
Here’s a breakdown of the files included in each simulation:

- **input_mesh.obj**: OBJ file with the input mesh.
- **openfoam_mesh.obj**: OBJ file with the OpenFOAM mesh.
- **pressure_field_mesh.vtk**: VTK file with the pressure field data.
- **streamlines_mesh.ply**: PLY file with the streamlines.
- **simulation_metadata.json**: JSON file with metadata about the input parameters and some output results, such as the force coefficients (obtained via simulation) and the paths of the output files.

<details> 
  <summary> <b>Examples:</b></summary>

  input_mesh.obj
  <p align="center">
    <img src="https://huggingface.co/datasets/inductiva/windtunnel/resolve/main/assets/input_mesh.png", width="500px">
  </p>

  openfoam_mesh.obj
  <p align="center">
    <img src="https://huggingface.co/datasets/inductiva/windtunnel/resolve/main/assets/openfoam_mesh.png", width="500px">
  </p>

  pressure_field_mesh.vtk
  <p align="center">
    <img src="https://huggingface.co/datasets/inductiva/windtunnel/resolve/main/assets/pressure_field_mesh.png", width="500px">
  </p>

  streamlines_mesh.ply
  <p align="center">
    <img src="https://huggingface.co/datasets/inductiva/windtunnel/resolve/main/assets/streamlines_mesh.png" width="500px">
  </p>

  
  simulation_metadata.json
  ```json
  {
    "id": "1w63au1gpxgyn9kun5q9r7eqa",
    "object_file": "object_24.obj",
    "wind_speed": 35,
    "rotate_angle": 332,
    "num_iterations": 300,
    "resolution": 5,
    "drag_coefficient": 0.8322182,
    "moment_coefficient": 0.3425206,
    "lift_coefficient": 0.1824983,
    "front_lift_coefficient": 0.4337698,
    "rear_lift_coefficient": -0.2512715,
    "input_mesh_path": "data/train/1w63au1gpxgyn9kun5q9r7eqa/input_mesh.obj",
    "openfoam_mesh_path": "data/train/1w63au1gpxgyn9kun5q9r7eqa/openfoam_mesh.obj",
    "pressure_field_mesh_path": "data/train/1w63au1gpxgyn9kun5q9r7eqa/pressure_field_mesh.vtk",
    "streamlines_mesh_path": "data/train/1w63au1gpxgyn9kun5q9r7eqa/streamlines_mesh.ply"
  }
  ```

</details>
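Once a simulation folder is on disk (see the download instructions below), the individual files can be inspected with standard Python tooling. The snippet below is a minimal loading sketch, assuming `trimesh` and `pyvista` are installed (`pip install trimesh pyvista`); the simulation ID is the example one above, and the names of the point-data arrays inside the VTK file are not documented here, so inspect them before use.

```python
import json

import trimesh   # reads the .obj and .ply meshes
import pyvista   # reads the .vtk pressure field

# Example simulation folder (same ID as the metadata example above)
sim_dir = "local_folder/data/train/1w63au1gpxgyn9kun5q9r7eqa"

# Input and OpenFOAM meshes (OBJ)
input_mesh = trimesh.load(f"{sim_dir}/input_mesh.obj", force="mesh")
openfoam_mesh = trimesh.load(f"{sim_dir}/openfoam_mesh.obj", force="mesh")
print(input_mesh.vertices.shape, openfoam_mesh.vertices.shape)

# Pressure field (VTK): inspect which per-point arrays are stored
pressure_mesh = pyvista.read(f"{sim_dir}/pressure_field_mesh.vtk")
print(pressure_mesh.point_data.keys())

# Streamlines (PLY)
streamlines = trimesh.load(f"{sim_dir}/streamlines_mesh.ply")

# Metadata: input parameters and force coefficients
with open(f"{sim_dir}/simulation_metadata.json") as f:
    metadata = json.load(f)
print(metadata["drag_coefficient"])
```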



## Downloading the Dataset

To download the dataset, first install the [Datasets package](https://huggingface.co/docs/datasets/en/index) by Hugging Face (installing it also brings in `huggingface_hub`, which is used below):

```bash
pip install datasets
```

### 1. Using snapshot_download()

```python
import huggingface_hub

dataset_name = "inductiva/windtunnel"

# Download the entire dataset
huggingface_hub.snapshot_download(repo_id=dataset_name, repo_type="dataset")

# Download to a specific local directory
huggingface_hub.snapshot_download(repo_id=dataset_name, repo_type="dataset", local_dir="local_folder")

# Download only the simulation metadata across all simulations
huggingface_hub.snapshot_download(
    repo_id=dataset_name,
    repo_type="dataset",
    local_dir="local_folder",
    allow_patterns=["*/*/*/simulation_metadata.json"]
)
```
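
With the metadata files downloaded into `local_folder`, the per-simulation force coefficients can be gathered into a single table for quick inspection. This is only a sketch, assuming `pandas` is installed and the directory layout shown above; the column names come from the `simulation_metadata.json` example.

```python
import glob
import json

import pandas as pd

# Collect every simulation_metadata.json downloaded by the call above
records = []
for path in glob.glob("local_folder/data/*/*/simulation_metadata.json"):
    with open(path) as f:
        records.append(json.load(f))

df = pd.DataFrame(records)
print(df[["id", "wind_speed", "rotate_angle", "drag_coefficient", "lift_coefficient"]].head())
```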

### 2. Using load_dataset()

```python
import datasets

# Load the dataset (streaming is supported)
dataset = datasets.load_dataset("inductiva/windtunnel", streaming=False)

# Display dataset information
print(dataset)

# Access a sample from the training set
sample = dataset["train"][0]
print("Sample from training set:", sample)
```

## Generating the meshes

Existing object datasets have significant limitations: they are either small, closed source, or contain low-quality meshes.
Hence, we decided to generate our own dataset using the [InstantMesh](https://github.com/TencentARC/InstantMesh) model, which is open source (Apache-2.0) and currently state-of-the-art in image-to-mesh generation. By leveraging it, we were able to generate a large number of good-quality, open-source automobile meshes.

The automobile-like meshes were generated by running the image-to-mesh model [InstantMesh](https://github.com/TencentARC/InstantMesh) on 1,000 images from the publicly available (Apache-2.0) [Stanford Cars Dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-cars-dataset), which consists of 16,185 images of automobiles.


Naturally, the image-to-mesh model produces meshes with certain defects, such as irregular surfaces, asymmetry, and disconnected components. We therefore applied a custom post-processing step to improve mesh quality: we used PCA to align each mesh with its principal axes and removed disconnected components.
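
The exact post-processing code is linked at the end of this section; the snippet below is only an illustrative sketch of those two operations (keeping the largest connected component and PCA-aligning the mesh) using `trimesh` and NumPy, not the pipeline we actually ran, and the file names are hypothetical.

```python
import numpy as np
import trimesh

def postprocess(path_in: str, path_out: str) -> None:
    """Sketch: keep the largest connected component and align the mesh via PCA."""
    mesh = trimesh.load(path_in, force="mesh")

    # Remove disconnected components by keeping the piece with the most faces.
    parts = mesh.split(only_watertight=False)
    if len(parts) > 0:
        mesh = max(parts, key=lambda m: len(m.faces))

    # PCA alignment: express the vertices in the frame of their principal axes.
    vertices = mesh.vertices - mesh.vertices.mean(axis=0)
    _, _, components = np.linalg.svd(vertices, full_matrices=False)
    mesh.vertices = vertices @ components.T

    mesh.export(path_out)

postprocess("raw_mesh.obj", "aligned_mesh.obj")  # hypothetical file names
```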

The resulting meshes still have minor defects, such as "spikes" or "cavities" in supposedly flat areas and asymmetric shapes, among others. We consider these minor defects valuable features of the dataset rather than issues: from the point of view of the learning problem, they pose challenges that we believe will contribute to more robust and generalizable models.


If you detect any clearly problematic mesh, please let us know so we can correct it in the next version of the Wind Tunnel 20K dataset.

Note: the code used to generate and post-process the meshes is available on GitHub: [https://github.com/inductiva/datasets-generation](https://github.com/inductiva/datasets-generation)


## What's next?
If you have any issues using this dataset, feel free to reach out to us at [support@inductiva.ai](mailto:support@inductiva.ai).

To learn more about how we created this dataset, or about how you can generate synthetic datasets for Physics-AI models, visit [Inductiva.AI](https://inductiva.ai) or check out our blog post on [transforming complex simulation workflows into easy-to-use Python classes](https://inductiva.ai/blog/article/transform-complex-simulations).