Transformer Encoder Base: Content
==================================================================================================================================
Layer (type (var_name)) Input Shape Output Shape Param # Trainable
==================================================================================================================================
TransformerEncoder (TransformerEncoder) [784, 2, 512] [784, 2, 512] -- True
├─ModuleList (layers) -- -- -- True
│ └─TransformerEncoderLayer (0) [784, 2, 512] [784, 2, 512] -- True
│ │ └─MultiheadAttention (self_attn) [784, 2, 512] [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout1) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm1) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─Linear (linear1) [784, 2, 512] [784, 2, 2048] 1,050,624 True
│ │ └─Dropout (dropout) [784, 2, 2048] [784, 2, 2048] -- --
│ │ └─Linear (linear2) [784, 2, 2048] [784, 2, 512] 1,049,088 True
│ │ └─Dropout (dropout2) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm2) [784, 2, 512] [784, 2, 512] 1,024 True
│ └─TransformerEncoderLayer (1) [784, 2, 512] [784, 2, 512] -- True
│ │ └─MultiheadAttention (self_attn) [784, 2, 512] [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout1) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm1) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─Linear (linear1) [784, 2, 512] [784, 2, 2048] 1,050,624 True
│ │ └─Dropout (dropout) [784, 2, 2048] [784, 2, 2048] -- --
│ │ └─Linear (linear2) [784, 2, 2048] [784, 2, 512] 1,049,088 True
│ │ └─Dropout (dropout2) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm2) [784, 2, 512] [784, 2, 512] 1,024 True
│ └─TransformerEncoderLayer (2) [784, 2, 512] [784, 2, 512] -- True
│ │ └─MultiheadAttention (self_attn) [784, 2, 512] [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout1) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm1) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─Linear (linear1) [784, 2, 512] [784, 2, 2048] 1,050,624 True
│ │ └─Dropout (dropout) [784, 2, 2048] [784, 2, 2048] -- --
│ │ └─Linear (linear2) [784, 2, 2048] [784, 2, 512] 1,049,088 True
│ │ └─Dropout (dropout2) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm2) [784, 2, 512] [784, 2, 512] 1,024 True
==================================================================================================================================
Total params: 9,457,152
Trainable params: 9,457,152
Non-trainable params: 0
Total mult-adds (G): 4.94
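The layer names (self_attn, linear1/linear2, norm1/norm2, dropout/dropout1/dropout2) and shapes above match PyTorch's stock nn.TransformerEncoder. A minimal sketch that reproduces these shapes and parameter counts follows; d_model=512 and dim_feedforward=2048 are read directly off the summary shapes, while nhead=8 is an assumption (the head count is not recoverable from the summary, and the parameter total is the same for any head count that divides 512):

```python
import torch
import torch.nn as nn

# d_model and dim_feedforward come from the summary shapes
# ([*, *, 512] and [*, *, 2048]); nhead=8 is an assumption.
layer = nn.TransformerEncoderLayer(
    d_model=512,
    nhead=8,
    dim_feedforward=2048,
)
encoder = nn.TransformerEncoder(layer, num_layers=3)

# Default batch_first=False, so the input is (seq_len, batch, d_model),
# matching the [784, 2, 512] shapes in the summary.
src = torch.randn(784, 2, 512)
out = encoder(src)
print(out.shape)                                     # torch.Size([784, 2, 512])
print(sum(p.numel() for p in encoder.parameters()))  # 9457152
```

The parameter total matches the "Total params: 9,457,152" reported above.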
Transformer Encoder Base: Style
==================================================================================================================================
Layer (type (var_name)) Input Shape Output Shape Param # Trainable
==================================================================================================================================
TransformerEncoder (TransformerEncoder) [784, 2, 512] [784, 2, 512] -- True
├─ModuleList (layers) -- -- -- True
│ └─TransformerEncoderLayer (0) [784, 2, 512] [784, 2, 512] -- True
│ │ └─MultiheadAttention (self_attn) [784, 2, 512] [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout1) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm1) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─Linear (linear1) [784, 2, 512] [784, 2, 2048] 1,050,624 True
│ │ └─Dropout (dropout) [784, 2, 2048] [784, 2, 2048] -- --
│ │ └─Linear (linear2) [784, 2, 2048] [784, 2, 512] 1,049,088 True
│ │ └─Dropout (dropout2) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm2) [784, 2, 512] [784, 2, 512] 1,024 True
│ └─TransformerEncoderLayer (1) [784, 2, 512] [784, 2, 512] -- True
│ │ └─MultiheadAttention (self_attn) [784, 2, 512] [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout1) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm1) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─Linear (linear1) [784, 2, 512] [784, 2, 2048] 1,050,624 True
│ │ └─Dropout (dropout) [784, 2, 2048] [784, 2, 2048] -- --
│ │ └─Linear (linear2) [784, 2, 2048] [784, 2, 512] 1,049,088 True
│ │ └─Dropout (dropout2) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm2) [784, 2, 512] [784, 2, 512] 1,024 True
│ └─TransformerEncoderLayer (2) [784, 2, 512] [784, 2, 512] -- True
│ │ └─MultiheadAttention (self_attn) [784, 2, 512] [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout1) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm1) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─Linear (linear1) [784, 2, 512] [784, 2, 2048] 1,050,624 True
│ │ └─Dropout (dropout) [784, 2, 2048] [784, 2, 2048] -- --
│ │ └─Linear (linear2) [784, 2, 2048] [784, 2, 512] 1,049,088 True
│ │ └─Dropout (dropout2) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm2) [784, 2, 512] [784, 2, 512] 1,024 True
==================================================================================================================================
Total params: 9,457,152
Trainable params: 9,457,152
Non-trainable params: 0
Total mult-adds (G): 4.94
==================================================================================================================================
Input size (MB): 3.21
Forward/backward pass size (MB): 134.87
Params size (MB): 25.22
Estimated Total Size (MB): 163.31
==================================================================================================================================
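The Content and Style encoders report identical totals, which can be hand-checked from the per-module counts in either table (plain arithmetic, no framework needed):

```python
d, ff, n_layers = 512, 2048, 3

# Per-module parameter counts, as listed in the summary:
attn = 4 * d * d + 4 * d   # QKV in-proj + out-proj, weights and biases -> 1,050,624
lin1 = d * ff + ff         # -> 1,050,624
lin2 = ff * d + d          # -> 1,049,088
norm = 2 * d               # LayerNorm weight + bias -> 1,024

per_layer = attn + lin1 + lin2 + 2 * norm  # 3,152,384
print(n_layers * per_layer)                # 9457152, matching "Total params"
```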
Transformer Decoder
=======================================================================================================================================
Layer (type (var_name)) Input Shape Output Shape Param # Trainable
=======================================================================================================================================
TransformerDecoder (TransformerDecoder) [784, 2, 512] [1, 784, 2, 512] -- True
├─ModuleList (layers) -- -- -- True
│ └─TransformerDecoderLayer (0) [784, 2, 512] [784, 2, 512] -- True
│ │ └─MultiheadAttention (self_attn) [784, 2, 512] [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout1) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm1) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─MultiheadAttention (multihead_attn) -- [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout2) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm2) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─Linear (linear1) [784, 2, 512] [784, 2, 2048] 1,050,624 True
│ │ └─Dropout (dropout) [784, 2, 2048] [784, 2, 2048] -- --
│ │ └─Linear (linear2) [784, 2, 2048] [784, 2, 512] 1,049,088 True
│ │ └─Dropout (dropout3) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm3) [784, 2, 512] [784, 2, 512] 1,024 True
│ └─TransformerDecoderLayer (1) [784, 2, 512] [784, 2, 512] -- True
│ │ └─MultiheadAttention (self_attn) [784, 2, 512] [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout1) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm1) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─MultiheadAttention (multihead_attn) -- [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout2) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm2) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─Linear (linear1) [784, 2, 512] [784, 2, 2048] 1,050,624 True
│ │ └─Dropout (dropout) [784, 2, 2048] [784, 2, 2048] -- --
│ │ └─Linear (linear2) [784, 2, 2048] [784, 2, 512] 1,049,088 True
│ │ └─Dropout (dropout3) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm3) [784, 2, 512] [784, 2, 512] 1,024 True
│ └─TransformerDecoderLayer (2) [784, 2, 512] [784, 2, 512] -- True
│ │ └─MultiheadAttention (self_attn) [784, 2, 512] [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout1) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm1) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─MultiheadAttention (multihead_attn) -- [784, 2, 512] 1,050,624 True
│ │ └─Dropout (dropout2) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm2) [784, 2, 512] [784, 2, 512] 1,024 True
│ │ └─Linear (linear1) [784, 2, 512] [784, 2, 2048] 1,050,624 True
│ │ └─Dropout (dropout) [784, 2, 2048] [784, 2, 2048] -- --
│ │ └─Linear (linear2) [784, 2, 2048] [784, 2, 512] 1,049,088 True
│ │ └─Dropout (dropout3) [784, 2, 512] [784, 2, 512] -- --
│ │ └─LayerNorm (norm3) [784, 2, 512] [784, 2, 512] 1,024 True
├─LayerNorm (norm) [784, 2, 512] [784, 2, 512] 1,024 True
=======================================================================================================================================
Total params: 12,613,120
Trainable params: 12,613,120
Non-trainable params: 0
Total mult-adds (G): 4.95
=======================================================================================================================================
Input size (MB): 6.42
Forward/backward pass size (MB): 160.56
Params size (MB): 25.24
Estimated Total Size (MB): 192.22
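The decoder summary adds a cross-attention block (multihead_attn, norm2) per layer plus a final top-level LayerNorm (norm), again matching PyTorch's stock nn.TransformerDecoder. A sketch under the same assumptions as the encoder (nhead=8 is a guess):

```python
import torch
import torch.nn as nn

# Same d_model/dim_feedforward as the encoders; nhead=8 is an assumption.
layer = nn.TransformerDecoderLayer(d_model=512, nhead=8, dim_feedforward=2048)
decoder = nn.TransformerDecoder(layer, num_layers=3, norm=nn.LayerNorm(512))

tgt = torch.randn(784, 2, 512)     # decoder input (seq_len, batch, d_model)
memory = torch.randn(784, 2, 512)  # encoder output fed to cross-attention
out = decoder(tgt, memory)
print(out.shape)                                     # torch.Size([784, 2, 512])
print(sum(p.numel() for p in decoder.parameters()))  # 12613120
```

Note that a stock nn.TransformerDecoder returns a [784, 2, 512] tensor; the extra leading dimension in the summary's top-level output shape ([1, 784, 2, 512]) likely comes from the surrounding code stacking or unsqueezing the output, not from the decoder itself.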
All Layers