model.pt (7.38 GB): Detected Pickle imports (25)
- "torch.bfloat16",
- "collections.OrderedDict",
- "transformers_modules.vinai.PhoGPT-4B-Chat.116013fa63f8c4025739487e1cbff65b7375bbe2.blocks.MPTBlock",
- "transformers_modules.vinai.PhoGPT-4B-Chat.116013fa63f8c4025739487e1cbff65b7375bbe2.configuration_mpt.MPTConfig",
- "torch._utils._rebuild_parameter",
- "transformers_modules.vinai.PhoGPT-4B-Chat.116013fa63f8c4025739487e1cbff65b7375bbe2.modeling_mpt.MPTForCausalLM",
- "transformers_modules.vinai.PhoGPT-4B-Chat.116013fa63f8c4025739487e1cbff65b7375bbe2.ffn.MPTMLP",
- "transformers_modules.vinai.PhoGPT-4B-Chat.116013fa63f8c4025739487e1cbff65b7375bbe2.norm.LPLayerNorm",
- "__builtin__.set",
- "torch.device",
- "torch.int8",
- "quanto.nn.qlinear.QLinear",
- "torch._C._nn.gelu",
- "transformers_modules.vinai.PhoGPT-4B-Chat.116013fa63f8c4025739487e1cbff65b7375bbe2.attention.scaled_multihead_dot_product_attention",
- "transformers_modules.vinai.PhoGPT-4B-Chat.116013fa63f8c4025739487e1cbff65b7375bbe2.modeling_mpt.MPTModel",
- "transformers_modules.vinai.PhoGPT-4B-Chat.116013fa63f8c4025739487e1cbff65b7375bbe2.attention.MultiheadAttention",
- "transformers.generation.configuration_utils.GenerationConfig",
- "quanto.tensor.qtype.qtype",
- "torch.BFloat16Storage",
- "transformers_modules.vinai.PhoGPT-4B-Chat.116013fa63f8c4025739487e1cbff65b7375bbe2.custom_embedding.SharedEmbedding",
- "torch._utils._rebuild_tensor_v2",
- "torch.nn.modules.dropout.Dropout",
- "torch.nn.modules.container.ModuleList",
- "functools.partial",
- "torch.FloatStorage"