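# Convert an ALBERT pre-training checkpoint into the Hugging Face Transformers
# naming scheme. The script renames tensors key by key, from the source
# framework's names (embedding.*, encoder.*, target.*) to the names expected
# by AlbertForPreTraining (albert.*, sop_classifier.*, predictions.*).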
import argparse
import collections
import torch


parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--input_model_path", type=str, default="models/input_model.bin",
                    help="Path to the input model checkpoint.")
parser.add_argument("--output_model_path", type=str, default="models/output_model.bin",
                    help="Path to save the converted model checkpoint.")

args = parser.parse_args()

# Load the source checkpoint on CPU so conversion does not require a GPU.
input_model = torch.load(args.input_model_path, map_location="cpu")

output_model = collections.OrderedDict()

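# Embedding tables and their LayerNorm. The [1:, :] slice drops the segment
# embedding's first row; the source checkpoint appears to reserve an extra
# leading segment id that the Hugging Face model does not use (an inference
# from the slice, not documented in the source).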
output_model["albert.embeddings.word_embeddings.weight"] = \
    input_model["embedding.word.embedding.weight"]
output_model["albert.embeddings.position_embeddings.weight"] = \
    input_model["embedding.pos.embedding.weight"]
output_model["albert.embeddings.token_type_embeddings.weight"] = \
    input_model["embedding.seg.embedding.weight"][1:, :]
output_model["albert.embeddings.LayerNorm.weight"] = \
    input_model["embedding.layer_norm.gamma"]
output_model["albert.embeddings.LayerNorm.bias"] = \
    input_model["embedding.layer_norm.beta"]

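# Encoder. embedding_hidden_mapping_in is ALBERT's projection from the
# embedding size up to the hidden size. ALBERT shares one transformer layer
# across the whole stack, so every encoder tensor lands in
# albert_layer_groups.0.albert_layers.0; linear_layers 0/1/2 are the
# query/key/value projections of self-attention.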
output_model["albert.encoder.embedding_hidden_mapping_in.weight"] = input_model["encoder.linear.weight"]
output_model["albert.encoder.embedding_hidden_mapping_in.bias"] = input_model["encoder.linear.bias"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.full_layer_layer_norm.weight"] = \
    input_model["encoder.transformer.layer_norm_2.gamma"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.full_layer_layer_norm.bias"] = \
    input_model["encoder.transformer.layer_norm_2.beta"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.weight"] = \
    input_model["encoder.transformer.self_attn.linear_layers.0.weight"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.bias"] = \
    input_model["encoder.transformer.self_attn.linear_layers.0.bias"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.weight"] = \
    input_model["encoder.transformer.self_attn.linear_layers.1.weight"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.bias"] = \
    input_model["encoder.transformer.self_attn.linear_layers.1.bias"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.weight"] = \
    input_model["encoder.transformer.self_attn.linear_layers.2.weight"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.bias"] = \
    input_model["encoder.transformer.self_attn.linear_layers.2.bias"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.dense.weight"] = \
    input_model["encoder.transformer.self_attn.final_linear.weight"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.dense.bias"] = \
    input_model["encoder.transformer.self_attn.final_linear.bias"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.LayerNorm.weight"] = \
    input_model["encoder.transformer.layer_norm_1.gamma"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.attention.LayerNorm.bias"] = \
    input_model["encoder.transformer.layer_norm_1.beta"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.ffn.weight"] = \
    input_model["encoder.transformer.feed_forward.linear_1.weight"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.ffn.bias"] = \
    input_model["encoder.transformer.feed_forward.linear_1.bias"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.ffn_output.weight"] = \
    input_model["encoder.transformer.feed_forward.linear_2.weight"]
output_model["albert.encoder.albert_layer_groups.0.albert_layers.0.ffn_output.bias"] = \
    input_model["encoder.transformer.feed_forward.linear_2.bias"]

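# Pooler and the two pre-training heads: sentence-order prediction
# (sop_classifier) and masked language modeling (predictions). Hugging Face's
# ALBERT MLM head keeps the decoder bias under both predictions.decoder.bias
# and predictions.bias, hence the duplicate assignment at the end.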
output_model["albert.pooler.weight"] = input_model["target.sp.linear_1.weight"]
output_model["albert.pooler.bias"] = input_model["target.sp.linear_1.bias"]
output_model["sop_classifier.classifier.weight"] = input_model["target.sp.linear_2.weight"]
output_model["sop_classifier.classifier.bias"] = input_model["target.sp.linear_2.bias"]
output_model["predictions.dense.weight"] = input_model["target.mlm.linear_1.weight"]
output_model["predictions.dense.bias"] = input_model["target.mlm.linear_1.bias"]
output_model["predictions.LayerNorm.weight"] = input_model["target.mlm.layer_norm.gamma"]
output_model["predictions.LayerNorm.bias"] = input_model["target.mlm.layer_norm.beta"]
output_model["predictions.decoder.weight"] = input_model["target.mlm.linear_2.weight"]
output_model["predictions.decoder.bias"] = input_model["target.mlm.linear_2.bias"]
output_model["predictions.bias"] = input_model["target.mlm.linear_2.bias"]

torch.save(output_model, args.output_model_path)
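
# Example invocation (paths are the argparse defaults; the script filename is
# a placeholder):
#   python convert_albert_to_huggingface.py \
#       --input_model_path models/input_model.bin \
#       --output_model_path models/output_model.bin
#
# A minimal sanity check for the converted file, assuming the `transformers`
# package and an AlbertConfig whose sizes match the checkpoint (neither is
# part of this script):
#   from transformers import AlbertConfig, AlbertForPreTraining
#   model = AlbertForPreTraining(AlbertConfig())  # fill in matching sizes
#   model.load_state_dict(torch.load("models/output_model.bin"), strict=False)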