Shourya Bose committed
Commit 9b93f60 · 1 Parent: 927a2f5
add example

Files changed:
- README.md (+17 -8)
- example.py → example_dataset.py (+100 -14)
README.md
CHANGED
---
license: cc
---

## Illinois building energy consumption

This repository contains two datasets of 592 Illinois buildings each, one more heterogeneous than the other. The data is sourced from the [NREL ComStock](https://comstock.nrel.gov/) model/dataset.

## Usage

**Multivariate dataset**

The file `custom_dataset.py` contains the function `get_data_and_generate_train_val_test_sets`, which takes in three arguments:

- `data_array`: This takes in a `np.ndarray` of shape `(num_buildings, time_points, num_features)` (note that for our experiments, `num_features` is fixed to 8). You can load it from the `.npz` files provided in this repository as `data_array = np.load('./IllinoisHeterogenous.npz')['data']`.
- `split_ratios`: A list of positive numbers that sum to 1, denoting the split (along the time axis) into train, validation, and test sets. For example, `split_ratios = [0.8, 0.1, 0.1]`.
- `dataset_kwargs`: Additional kwargs for configuring the data. For example, `dataset_kwargs = {'lookback': 96, 'lookahead': 4, 'normalize': True, 'transformer': True}`; a full call is sketched after this list.
  - `lookback`: The number of previous points fed as input. Also denoted by L.
  - `lookahead`: The number of points ahead to predict. Also denoted by T.
  - `normalize` (boolean): If set to `True`, data is normalized per feature.
  - `transformer` (boolean): If set to `True` and `normalize` is also `True`, then categorical time features are not normalized. Useful for embedding said features in Transformers.
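
Putting the pieces together, a minimal sketch of a multivariate call (the variable name `data` is illustrative; the `num_bldg` and `dtype` keys mirror `example_dataset.py` below):

```python
import numpy as np
import torch

from custom_dataset import get_data_and_generate_train_val_test_sets

# load the raw array: shape (num_buildings, time_points, 8)
data = np.load('./IllinoisHeterogenous.npz')['data']

train, val, test, mean, std = get_data_and_generate_train_val_test_sets(
    data_array=data,
    split_ratios=[0.8, 0.1, 0.1],      # train/val/test split along the time axis
    dataset_kwargs={
        'num_bldg': data.shape[0],
        'lookback': 96,                # L: input window length
        'lookahead': 4,                # T: prediction horizon
        'normalize': True,
        'dtype': torch.float32,
        'transformer': True,           # keep categorical time features unnormalized for embedding
    },
)
```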

The outputs are as follows:

- `train`, `val`, `test`: The train, validation, and test datasets, which can be wrapped directly in `torch.utils.data.DataLoader`.
- `mean`: `np.ndarray` containing the featurewise mean. If `normalize` is `False`, then it defaults to all `0`s.
- `std`: `np.ndarray` containing the featurewise standard deviation. If `normalize` is `False`, then it defaults to all `1`s.
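
Since `mean` and `std` capture the featurewise statistics, normalized model outputs can be mapped back to physical units. A sketch, continuing from the call above; `preds` is a stand-in for a model's normalized predictions, and it is assumed the last axis is the feature axis:

```python
import numpy as np

# stand-in for normalized predictions: (batch, lookahead, num_features)
preds = np.random.randn(32, 4, 8)

# undo the per-feature normalization; flatten() gives shape (num_features,)
preds_physical = preds * std.flatten() + mean.flatten()
```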

**Univariate dataset**

The file `custom_dataset_univariate.py` is used in the same way as `custom_dataset.py`, except that `dataset_kwargs` does not take a key called `transformer` (it is deprecated), since there are no categorical features to normalize.
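
The univariate call then looks like this (a sketch mirroring `example_dataset.py` below; note the absence of the `transformer` key):

```python
import numpy as np
import torch

from custom_dataset_univariate import get_data_and_generate_train_val_test_sets as univariate_dataset

data = np.load('./IllinoisHeterogenous.npz')['data']

train_u, val_u, test_u, mean_u, std_u = univariate_dataset(
    data_array=data,
    split_ratios=[0.8, 0.1, 0.1],
    dataset_kwargs={
        'num_bldg': data.shape[0],
        'lookback': 96,
        'lookahead': 4,
        'normalize': True,
        'dtype': torch.float32,
    },
)
```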

## Usage and Example

This repository requires only the `numpy` and `torch` packages to run. For details on the elements of each dataset, plus its different configurations, you are encouraged to run `example_dataset.py`.
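
Batching a returned train split follows the usual PyTorch pattern (a sketch; `train` is the multivariate split from the earlier sketch, and multivariate items carry a third `future_time` tensor):

```python
import torch

dl = torch.utils.data.DataLoader(train, batch_size=32, shuffle=False)

for inp, label, future_time in dl:  # univariate datasets yield (inp, label) only
    print(inp.shape, label.shape, future_time.shape)
    break
```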

## ComStock Notice

This data includes information from the ComStock™ dataset developed by the National Renewable Energy Laboratory (NREL) with funding from the U.S. Department of Energy (DOE). This model was trained using ComStock release 2023.2. NREL regularly publishes updated datasets which generally improve the representation of building energy consumption. Users interested in training their own models should review the latest dataset releases to assess whether recent updates offer features relevant to their modeling objectives.

**Suggested Citation:**

Parker, Andrew, et al. 2023. ComStock Reference Documentation. Golden, CO: National Renewable Energy Laboratory. NREL/TP-5500-83819. https://www.nrel.gov/docs/fy23osti/83819.pdf
example.py → example_dataset.py
RENAMED
```python
import numpy as np
import torch

from custom_dataset import get_data_and_generate_train_val_test_sets as multivariate_dataset
from custom_dataset_univariate import get_data_and_generate_train_val_test_sets as univariate_dataset

# names of the 8 features
feat_names = ['energy consumption (kwh)', '15-min interval of day [0..96]', 'day of week [0..6]',
              'temperature (celsius)', 'windspeed (m/s)', 'floor area (ft2)', 'wall area (m2)',
              'window area (m2)']

# load raw numpy data
heterogenous_data = np.load('./IllinoisHeterogenous.npz')['data']
homogenous_data = np.load('./IllinoisHomogenous.npz')['data']

# generate train-val-test datasets

# CASE 1: multivariate, with the time indices also normalized. heterogeneous dataset
train_1, val_1, test_1, mean_1, std_1 = multivariate_dataset(
    data_array=heterogenous_data,  # choose the appropriate array - homogeneous or heterogeneous
    split_ratios=[0.8, 0.1, 0.1],  # ratios that add up to 1 - the split is made along all buildings' time axis
    dataset_kwargs={
        'num_bldg': heterogenous_data.shape[0],
        'lookback': 512,
        'lookahead': 48,
        'normalize': True,
        'dtype': torch.float32,
        'transformer': False  # time indices are normalized with the other features - use when index embedding is not needed
    }
)

# CASE 2: multivariate, with the time indices not normalized. heterogeneous dataset
train_2, val_2, test_2, mean_2, std_2 = multivariate_dataset(
    data_array=heterogenous_data,
    split_ratios=[0.8, 0.1, 0.1],
    dataset_kwargs={
        'num_bldg': heterogenous_data.shape[0],
        'lookback': 512,
        'lookahead': 48,
        'normalize': True,
        'dtype': torch.float32,
        'transformer': True  # time indices are left unnormalized - use in Transformer scenarios where the index is embedded
    }
)

# CASE 3: univariate. heterogeneous dataset
train_3, val_3, test_3, mean_3, std_3 = univariate_dataset(
    data_array=heterogenous_data,
    split_ratios=[0.8, 0.1, 0.1],
    dataset_kwargs={
        'num_bldg': heterogenous_data.shape[0],
        'lookback': 512,
        'lookahead': 48,
        'normalize': True,
        'dtype': torch.float32,
    }
)

# CASE 4: multivariate, with the time indices also normalized. homogeneous dataset
train_4, val_4, test_4, mean_4, std_4 = multivariate_dataset(
    data_array=homogenous_data,
    split_ratios=[0.8, 0.1, 0.1],
    dataset_kwargs={
        'num_bldg': homogenous_data.shape[0],
        'lookback': 512,
        'lookahead': 48,
        'normalize': True,
        'dtype': torch.float32,
        'transformer': False  # time indices are normalized with the other features
    }
)

# CASE 5: multivariate, with the time indices not normalized. homogeneous dataset
train_5, val_5, test_5, mean_5, std_5 = multivariate_dataset(
    data_array=homogenous_data,
    split_ratios=[0.8, 0.1, 0.1],
    dataset_kwargs={
        'num_bldg': homogenous_data.shape[0],
        'lookback': 512,
        'lookahead': 48,
        'normalize': True,
        'dtype': torch.float32,
        'transformer': True  # time indices are left unnormalized for embedding
    }
)

# CASE 6: univariate. homogeneous dataset
train_6, val_6, test_6, mean_6, std_6 = univariate_dataset(
    data_array=homogenous_data,
    split_ratios=[0.8, 0.1, 0.1],
    dataset_kwargs={
        'num_bldg': homogenous_data.shape[0],
        'lookback': 512,
        'lookahead': 48,
        'normalize': True,
        'dtype': torch.float32,
    }
)

if __name__ == "__main__":

    dl_1 = torch.utils.data.DataLoader(train_1, batch_size=32, shuffle=False)
    dl_2 = torch.utils.data.DataLoader(train_2, batch_size=32, shuffle=False)
    dl_3 = torch.utils.data.DataLoader(train_3, batch_size=32, shuffle=False)
    dl_4 = torch.utils.data.DataLoader(train_4, batch_size=32, shuffle=False)
    dl_5 = torch.utils.data.DataLoader(train_5, batch_size=32, shuffle=False)
    dl_6 = torch.utils.data.DataLoader(train_6, batch_size=32, shuffle=False)

    # print the shapes of elements in the first dataloader
    for inp, label, future_time in dl_1:
        print("Case 1: Each dataloader item contains input, label, future_time. Here time indices are normalized. Dataset is IL-HET.")
        print(f"Input shape is (including batch size of 32): {inp.shape}.")
        print(f"Label shape is (including batch size of 32): {label.shape}.")
        print(f"Future time shape is (including batch size of 32): {future_time.shape}.\n")
        for m, s, n, i in zip(mean_1.flatten().tolist(), std_1.flatten().tolist(), feat_names, range(1, len(feat_names) + 1)):
            print(f"Feature number: {i}, name: {n}, mean: {m}, std: {s}." + (" (unnormalized)" if m == 0 and s == 1 else ""))
        print('----------------\n')
        break

    # print the shapes of elements in the second dataloader
    for inp, label, future_time in dl_2:
        print("Case 2: Each dataloader item contains input, label, future_time. Here time indices are not normalized to allow embedding. Dataset is IL-HET.")
        print(f"Input shape is (including batch size of 32): {inp.shape}.")
        print(f"Label shape is (including batch size of 32): {label.shape}.")
        print(f"Future time shape is (including batch size of 32): {future_time.shape}.\n")
        for m, s, n, i in zip(mean_2.flatten().tolist(), std_2.flatten().tolist(), feat_names, range(1, len(feat_names) + 1)):
            print(f"Feature number: {i}, name: {n}, mean: {m}, std: {s}." + (" (unnormalized)" if m == 0 and s == 1 else ""))
        print('----------------\n')
        break

    # print the shapes of elements in the third dataloader
    for inp, label in dl_3:
        print("Case 3: Each dataloader item contains input, label. Dataset is IL-HET.")
        print(f"Input shape is (including batch size of 32): {inp.shape}.")
        print(f"Label shape is (including batch size of 32): {label.shape}.\n")
        print(f"Feature number: 1, name: {feat_names[0]}, mean: {mean_3.item()}, std: {std_3.item()}.")
        print('----------------\n')
        break

    # print the shapes of elements in the fourth dataloader
    for inp, label, future_time in dl_4:
        print("Case 4: Each dataloader item contains input, label, future_time. Here time indices are normalized. Dataset is IL-HOM.")
        print(f"Input shape is (including batch size of 32): {inp.shape}.")
        print(f"Label shape is (including batch size of 32): {label.shape}.")
        print(f"Future time shape is (including batch size of 32): {future_time.shape}.\n")
        for m, s, n, i in zip(mean_4.flatten().tolist(), std_4.flatten().tolist(), feat_names, range(1, len(feat_names) + 1)):
            print(f"Feature number: {i}, name: {n}, mean: {m}, std: {s}." + (" (unnormalized)" if m == 0 and s == 1 else ""))
        print('----------------\n')
        break

    # print the shapes of elements in the fifth dataloader
    for inp, label, future_time in dl_5:
        print("Case 5: Each dataloader item contains input, label, future_time. Here time indices are not normalized to allow embedding. Dataset is IL-HOM.")
        print(f"Input shape is (including batch size of 32): {inp.shape}.")
        print(f"Label shape is (including batch size of 32): {label.shape}.")
        print(f"Future time shape is (including batch size of 32): {future_time.shape}.\n")
        for m, s, n, i in zip(mean_5.flatten().tolist(), std_5.flatten().tolist(), feat_names, range(1, len(feat_names) + 1)):
            print(f"Feature number: {i}, name: {n}, mean: {m}, std: {s}." + (" (unnormalized)" if m == 0 and s == 1 else ""))
        print('----------------\n')
        break

    # print the shapes of elements in the sixth dataloader
    for inp, label in dl_6:
        print("Case 6: Each dataloader item contains input, label. Dataset is IL-HOM.")
        print(f"Input shape is (including batch size of 32): {inp.shape}.")
        print(f"Label shape is (including batch size of 32): {label.shape}.\n")
        print(f"Feature number: 1, name: {feat_names[0]}, mean: {mean_6.item()}, std: {std_6.item()}.")
        print('----------------\n')
        break
```