model.pt
Detected Pickle imports (33)
- quanto.tensor.qtype.qtype
- torch.nn.modules.container.ParameterList
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LinearGLUMoELayer
- torch.nn.modules.sparse.Embedding
- torch._utils._rebuild_tensor_v2
- quanto.nn.qlinear.QLinear
- torch.nn.modules.activation.Softplus
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LinearGLUExperts
- torch._utils._rebuild_parameter
- torch.nn.modules.container.ModuleList
- torch.nn.modules.container.Sequential
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaMoEModel
- torch.int8
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaRMSNorm
- torch.Size
- torch.nn.modules.activation.Tanh
- torch.nn.modules.activation.SiLU
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaMoEDecoderLayer
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.TopKBalancedNoisyGate
- __builtin__.set
- collections.OrderedDict
- torch.BFloat16Storage
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaAttention
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.UniversalCalculator
- torch.nn.modules.activation.Softmax
- torch.bfloat16
- torch.FloatStorage
- torch.distributions.normal.Normal
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaRotaryEmbedding
- torch.device
- transformers.generation.configuration_utils.GenerationConfig
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.configuration_llama_moe.LlamaMoEConfig
- transformers_modules.llama-moe.LLaMA-MoE-v1-3_5B-4_16.a9d7c3dbcf76616240a40b03cc26c55d0af63195.modeling_llama_moe_hf.LlamaMoEForCausalLM
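An import list like the one above comes from static analysis of the pickle opcode stream: the scanner reads the `GLOBAL` / `STACK_GLOBAL` opcodes without ever unpickling (and therefore without executing) the file. As a minimal sketch of that idea (not the Hub's actual scanner), the snippet below enumerates referenced globals with the standard-library `pickletools` module. The `STACK_GLOBAL` handling is a simplifying assumption: it expects the module and attribute names to appear inline as string constants just before the opcode, which holds for simple payloads but not for memo-referenced strings.

```python
import pickle
import pickletools
import collections

def detect_pickle_imports(data: bytes) -> set:
    """Statically list 'module.name' globals a pickle references,
    without executing it. Sketch only: STACK_GLOBAL resolution here
    assumes the name strings were pushed inline (no memo indirection)."""
    imports = set()
    recent_strings = []  # string constants seen so far, in order
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocol <= 3: arg is "module name" as one space-joined string
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Protocol >= 4: module and name were pushed as the two
            # most recent string constants
            imports.add(f"{recent_strings[-2]}.{recent_strings[-1]}")
        if isinstance(arg, str):
            recent_strings.append(arg)
    return imports

# Demo: a pickled OrderedDict references collections.OrderedDict
payload = pickle.dumps(collections.OrderedDict(a=1), protocol=4)
print(detect_pickle_imports(payload))
```

For loading untrusted checkpoints like this one, a safer route than plain `pickle.load` is `torch.load(path, weights_only=True)`, which restricts unpickling to an allowlist of tensor-related types; note that it would reject the custom `transformers_modules.*` classes listed above.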
File size: 13.6 GB
Upload folder using huggingface_hub (#1)