02/07/2024 20:07:36 - INFO - __main__ -   device: cuda, n_gpu: 1
02/07/2024 20:07:39 - INFO - __main__ -   +------------------------------------------------------------+--------------+----------+
| Layer Name                                                 | Output Shape |  Param # |
+------------------------------------------------------------+--------------+----------+
| encoder.embeddings.word_embeddings.weight                  | [51451, 768] | 39514368 |
| encoder.embeddings.position_embeddings.weight              |  [1026, 768] |   787968 |
| encoder.embeddings.token_type_embeddings.weight            |    [10, 768] |     7680 |
| encoder.embeddings.LayerNorm.weight                        |        [768] |      768 |
| encoder.embeddings.LayerNorm.bias                          |        [768] |      768 |
| encoder.encoder.layer.0.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.0.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.0.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.0.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.0.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.0.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.0.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.0.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.0.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.0.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.0.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.0.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.0.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.0.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.0.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.0.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.1.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.1.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.1.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.1.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.1.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.1.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.1.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.1.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.1.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.1.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.1.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.1.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.1.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.1.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.1.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.1.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.2.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.2.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.2.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.2.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.2.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.2.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.2.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.2.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.2.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.2.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.2.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.2.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.2.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.2.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.2.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.2.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.3.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.3.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.3.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.3.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.3.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.3.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.3.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.3.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.3.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.3.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.3.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.3.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.3.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.3.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.3.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.3.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.4.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.4.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.4.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.4.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.4.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.4.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.4.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.4.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.4.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.4.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.4.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.4.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.4.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.4.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.4.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.4.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.5.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.5.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.5.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.5.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.5.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.5.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.5.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.5.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.5.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.5.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.5.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.5.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.5.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.5.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.5.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.5.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.6.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.6.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.6.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.6.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.6.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.6.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.6.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.6.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.6.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.6.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.6.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.6.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.6.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.6.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.6.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.6.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.7.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.7.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.7.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.7.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.7.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.7.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.7.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.7.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.7.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.7.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.7.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.7.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.7.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.7.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.7.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.7.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.8.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.8.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.8.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.8.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.8.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.8.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.8.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.8.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.8.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.8.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.8.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.8.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.8.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.8.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.8.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.8.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.9.attention.self.query.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.9.attention.self.query.bias          |        [768] |      768 |
| encoder.encoder.layer.9.attention.self.key.weight          |   [768, 768] |   589824 |
| encoder.encoder.layer.9.attention.self.key.bias            |        [768] |      768 |
| encoder.encoder.layer.9.attention.self.value.weight        |   [768, 768] |   589824 |
| encoder.encoder.layer.9.attention.self.value.bias          |        [768] |      768 |
| encoder.encoder.layer.9.attention.output.dense.weight      |   [768, 768] |   589824 |
| encoder.encoder.layer.9.attention.output.dense.bias        |        [768] |      768 |
| encoder.encoder.layer.9.attention.output.LayerNorm.weight  |        [768] |      768 |
| encoder.encoder.layer.9.attention.output.LayerNorm.bias    |        [768] |      768 |
| encoder.encoder.layer.9.intermediate.dense.weight          |  [3072, 768] |  2359296 |
| encoder.encoder.layer.9.intermediate.dense.bias            |       [3072] |     3072 |
| encoder.encoder.layer.9.output.dense.weight                |  [768, 3072] |  2359296 |
| encoder.encoder.layer.9.output.dense.bias                  |        [768] |      768 |
| encoder.encoder.layer.9.output.LayerNorm.weight            |        [768] |      768 |
| encoder.encoder.layer.9.output.LayerNorm.bias              |        [768] |      768 |
| encoder.encoder.layer.10.attention.self.query.weight       |   [768, 768] |   589824 |
| encoder.encoder.layer.10.attention.self.query.bias         |        [768] |      768 |
| encoder.encoder.layer.10.attention.self.key.weight         |   [768, 768] |   589824 |
| encoder.encoder.layer.10.attention.self.key.bias           |        [768] |      768 |
| encoder.encoder.layer.10.attention.self.value.weight       |   [768, 768] |   589824 |
| encoder.encoder.layer.10.attention.self.value.bias         |        [768] |      768 |
| encoder.encoder.layer.10.attention.output.dense.weight     |   [768, 768] |   589824 |
| encoder.encoder.layer.10.attention.output.dense.bias       |        [768] |      768 |
| encoder.encoder.layer.10.attention.output.LayerNorm.weight |        [768] |      768 |
| encoder.encoder.layer.10.attention.output.LayerNorm.bias   |        [768] |      768 |
| encoder.encoder.layer.10.intermediate.dense.weight         |  [3072, 768] |  2359296 |
| encoder.encoder.layer.10.intermediate.dense.bias           |       [3072] |     3072 |
| encoder.encoder.layer.10.output.dense.weight               |  [768, 3072] |  2359296 |
| encoder.encoder.layer.10.output.dense.bias                 |        [768] |      768 |
| encoder.encoder.layer.10.output.LayerNorm.weight           |        [768] |      768 |
| encoder.encoder.layer.10.output.LayerNorm.bias             |        [768] |      768 |
| encoder.encoder.layer.11.attention.self.query.weight       |   [768, 768] |   589824 |
| encoder.encoder.layer.11.attention.self.query.bias         |        [768] |      768 |
| encoder.encoder.layer.11.attention.self.key.weight         |   [768, 768] |   589824 |
| encoder.encoder.layer.11.attention.self.key.bias           |        [768] |      768 |
| encoder.encoder.layer.11.attention.self.value.weight       |   [768, 768] |   589824 |
| encoder.encoder.layer.11.attention.self.value.bias         |        [768] |      768 |
| encoder.encoder.layer.11.attention.output.dense.weight     |   [768, 768] |   589824 |
| encoder.encoder.layer.11.attention.output.dense.bias       |        [768] |      768 |
| encoder.encoder.layer.11.attention.output.LayerNorm.weight |        [768] |      768 |
| encoder.encoder.layer.11.attention.output.LayerNorm.bias   |        [768] |      768 |
| encoder.encoder.layer.11.intermediate.dense.weight         |  [3072, 768] |  2359296 |
| encoder.encoder.layer.11.intermediate.dense.bias           |       [3072] |     3072 |
| encoder.encoder.layer.11.output.dense.weight               |  [768, 3072] |  2359296 |
| encoder.encoder.layer.11.output.dense.bias                 |        [768] |      768 |
| encoder.encoder.layer.11.output.LayerNorm.weight           |        [768] |      768 |
| encoder.encoder.layer.11.output.LayerNorm.bias             |        [768] |      768 |
| encoder.pooler.dense.weight                                |   [768, 768] |   589824 |
| encoder.pooler.dense.bias                                  |        [768] |      768 |
+------------------------------------------------------------+--------------+----------+
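The table above lists the encoder's named parameters (a 12-layer, 768-hidden RoBERTa-style encoder, about 126M parameters when the rows are summed). As a minimal sketch, this is how such a summary could be produced with PyTorch and the prettytable package; the helper actually used by the CoCoSoDa code base may differ, and the checkpoint name is taken from the log below.

```python
# Sketch only: build a layer/shape/param-count table like the one above.
# Assumes PyTorch, the `prettytable` package, and the checkpoint named in the log.
from prettytable import PrettyTable
from transformers import AutoModel

def parameter_table(model):
    table = PrettyTable(["Layer Name", "Output Shape", "Param #"])
    table.align["Layer Name"] = "l"
    total = 0
    for name, param in model.named_parameters():
        table.add_row([name, list(param.shape), param.numel()])
        total += param.numel()
    return table, total

if __name__ == "__main__":
    encoder = AutoModel.from_pretrained("DeepSoftwareAnalytics/CoCoSoDa")
    table, total = parameter_table(encoder)
    print(table)
    print(f"Total parameters: {total:,}")  # ~126M for the encoder logged above
```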
02/07/2024 20:07:39 - INFO - __main__ -   Training/evaluation parameters Namespace(agg_way='avg', aug_type_way='random_replace_type', code_length=256, codebase_file='dataset/python/codebase.jsonl', config_name='DeepSoftwareAnalytics/CoCoSoDa', couninue_pre_train_data_files=['dataset/ruby/train.jsonl', 'dataset/java/train.jsonl'], data_aug_type='random_mask', data_flow_length=0, debug=False, device=device(type='cuda'), do_avg=False, do_continue_pre_trained=False, do_eval=False, do_fine_tune=False, do_ineer_loss=False, do_multi_lang_continue_pre_train=False, do_single_lang_continue_pre_train=False, do_test=True, do_train=False, do_whitening=False, do_zero_short=True, epoch=50, eval_batch_size=128, eval_data_file='dataset/java/valid.jsonl', eval_frequency=100, fp16=False, gradient_accumulation_steps=1, hidden_size=768, lang='python', learning_rate=2e-05, loaded_codebert_model_filename=None, loaded_model_filename=None, local_rank=-1, logging_steps=50, max_codeblock_num=10, max_grad_norm=1.0, max_steps=100, mlm_probability=0.1, mlp=False, moco_dim=768, moco_k=1024, moco_m=0.999, moco_t=0.07, moco_type='encoder_queue', model_name_or_path='DeepSoftwareAnalytics/CoCoSoDa', model_type='base', n_debug_samples=100, n_gpu=1, nl_length=128, num_train_epochs=4, num_warmup_steps=0, only_save_the_nl_code_vec=False, output_dir='./saved_models/zero-shot/python', print_align_unif_loss=False, save_evaluation_reuslt=False, save_evaluation_reuslt_dir=None, save_steps=50, seed=123456, test_data_file='dataset/python/test.jsonl', time_score=1, tokenizer_name='DeepSoftwareAnalytics/CoCoSoDa', train_batch_size=4, train_data_file='dataset/java/train.jsonl', use_best_mrr_model=False, weight_decay=0.01)
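The parameters show a zero-shot test run (do_test=True, do_zero_short=True) of the released DeepSoftwareAnalytics/CoCoSoDa checkpoint on the python test set, with 256-token code inputs and 128-token queries. A hedged sketch of how query and code vectors could be obtained from that checkpoint via the HuggingFace transformers API follows; the masked mean pooling is an assumption based on agg_way='avg', and the repository's MoCo-style wrapper may pool differently.

```python
# Hedged sketch, not the repository's exact pipeline: encode a query and a code
# snippet with the released checkpoint and compare them by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

name = "DeepSoftwareAnalytics/CoCoSoDa"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)

@torch.no_grad()
def embed(texts, max_length):
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=max_length, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state              # [B, T, 768]
    mask = batch["attention_mask"].unsqueeze(-1).float()      # [B, T, 1]
    pooled = (hidden * mask).sum(1) / mask.sum(1)             # masked mean pooling (assumption)
    return torch.nn.functional.normalize(pooled, dim=-1)      # unit vectors for cosine similarity

query_vec = embed(["read a jsonl file into a list of dicts"], max_length=128)
code_vec = embed(["def load_jsonl(path):\n    return [json.loads(l) for l in open(path)]"],
                 max_length=256)
print((query_vec @ code_vec.T).item())                        # similarity score
```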
02/07/2024 20:07:41 - INFO - __main__ -   runnning test
02/07/2024 20:08:21 - INFO - __main__ -   ***** Running evaluation on python *****
02/07/2024 20:08:21 - INFO - __main__ -     Num queries = 14918
02/07/2024 20:08:21 - INFO - __main__ -     Num codes = 43827
02/07/2024 20:08:21 - INFO - __main__ -     Batch size = 128
02/07/2024 20:12:35 - INFO - __main__ -   ***** Eval test results *****
02/07/2024 20:12:35 - INFO - __main__ -     R@1 = 0.589
02/07/2024 20:12:35 - INFO - __main__ -     R@10 = 0.889
02/07/2024 20:12:35 - INFO - __main__ -     R@5 = 0.828
02/07/2024 20:12:35 - INFO - __main__ -     eval_mrr = 0.696
02/07/2024 20:12:35 - INFO - utils -   saved dataset in ./saved_models/zero-shot/python/result.jsonl
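For reference, the reported R@k and eval_mrr are ranking metrics over the full candidate pool (here, 14,918 queries ranked against 43,827 codes). A minimal NumPy sketch of these metrics, assuming one gold code per query and a precomputed query-code similarity matrix; this is not the repository's exact evaluation code.

```python
# Minimal sketch of the reported metrics: rank all candidate codes for each query
# by similarity and measure R@1/R@5/R@10 and MRR of the gold code.
import numpy as np

def retrieval_metrics(scores, gold_idx, ks=(1, 5, 10)):
    """scores: [num_queries, num_codes] similarity matrix;
    gold_idx[i] is the index of the correct code for query i."""
    order = np.argsort(-scores, axis=1)                        # best candidate first
    ranks = np.empty(len(gold_idx), dtype=np.int64)
    for i, gold in enumerate(gold_idx):
        ranks[i] = int(np.where(order[i] == gold)[0][0]) + 1   # 1-based rank of the gold code
    metrics = {f"R@{k}": float((ranks <= k).mean()) for k in ks}
    metrics["eval_mrr"] = float((1.0 / ranks).mean())
    return metrics

# Toy usage; the run above scores a 14918 x 43827 matrix instead.
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 10))
print(retrieval_metrics(scores, gold_idx=np.array([0, 1, 2, 3])))
```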