Latest commit: Update README.md (b41a221, verified)
model.pt
Detected Pickle imports (25):
- torch.int8
- transformers.generation.configuration_utils.GenerationConfig
- quanto.tensor.qtype.qtype
- torch._utils._rebuild_parameter
- torch.nn.modules.container.ModuleList
- transformers.activations.FastGELUActivation
- collections.OrderedDict
- torch.nn.modules.sparse.Embedding
- __builtin__.set
- quanto.nn.qlinear.QLinear
- torch._utils._rebuild_tensor_v2
- transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXForCausalLM
- torch.nn.modules.dropout.Dropout
- torch.BoolStorage
- transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXLayer
- torch.BFloat16Storage
- transformers.models.gpt_neox.configuration_gpt_neox.GPTNeoXConfig
- torch.device
- transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXModel
- torch.FloatStorage
- transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXSdpaAttention
- torch.nn.modules.normalization.LayerNorm
- torch.bfloat16
- transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXRotaryEmbedding
- transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXMLP
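A pickle scan like the one above works by walking the file's opcode stream and recording every `GLOBAL`/`STACK_GLOBAL` reference, without ever executing the pickle. Below is a minimal illustrative sketch of that idea using the standard-library `pickletools` module; `detect_imports` is a hypothetical helper, not the scanner Hugging Face actually runs, and it ignores memoized string reuse for simplicity.

```python
import collections
import pickle
import pickletools

def detect_imports(data: bytes) -> set[str]:
    """Statically collect module.name references from a pickle byte stream.

    GLOBAL (protocol <= 3) carries "module name" as its argument;
    STACK_GLOBAL (protocol >= 4) takes the module and name from the two
    most recent unicode strings pushed onto the stack.
    """
    imports: set[str] = set()
    strings: list[str] = []  # unicode args seen so far, in order
    for op, arg, _pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # simplification: assume the last two strings are module and name
            imports.add(f"{strings[-2]}.{strings[-1]}")
        elif isinstance(arg, str) and "UNICODE" in op.name:
            strings.append(arg)
    return imports

# A small stand-in payload for model.pt's embedded data.pkl.
payload = pickle.dumps(collections.OrderedDict(a=1))
print(sorted(detect_imports(payload)))  # ['collections.OrderedDict']
```

Because the scan only reads opcodes, a flagged import means the class *would* be resolved at load time, which is why loading an untrusted `.pt` file is unsafe.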
Size: 17.7 GB · Last commit: Upload folder using huggingface_hub (#1)